Dataset columns: Title of Competition (stringclasses, 343 values) · Title of Writeup (stringlengths, 6–139) · User (stringclasses, 1 value) · Writeup (stringlengths, 419–80.4k)
Kaggle - LLM Science Exam
70th place solution (PB:0.907) and PB:0.911 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to everyone involved in this competition.<br> It was a fierce and exciting one.<br> Our solution centers on training with <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s 60k dataset and retrieval with <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>'s 270k articles.</p> <h1>PB:0.907 solution</h1> <h4>1. Models</h4> <p>We built an ensemble of four models, codenamed itk16, cody, jonge, and itk17. All of them are DeBERTa-V3-large. Of these, cody and jonge follow <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">baseline</a>.<br> <a href="https://www.kaggle.com/domdejonge" target="_blank">@domdejonge</a>'s model (jonge) used <strong>label smoothing=0.1</strong>, which helped a little.</p> <p>itk16 and itk17 process the choices as follows, allowing the model to answer each option while comparing it with the other choices:</p> <pre><code># each option's text is concatenated with a random permutation of the other choices
# ("~" denotes the range of the remaining option columns)
option_A = [df["A"] + rand_perm(df["B"] ~ df["E"])]
option_B = [df["B"] + rand_perm(df["A"] ~ df["E"])]
...
option_E = [df["E"] + rand_perm(df["A"] ~ df["D"])]
</code></pre> <p>Here, rand_perm is a function that randomly shuffles the order, which also acts as data augmentation. However, it did not have much effect.</p> <p>These two models were also trained with <a href="https://arxiv.org/abs/1605.07725" target="_blank">Fast Gradient Method (FGM)</a> adversarial training. <strong>FGM was more powerful than label smoothing: CV improved by about 0.01–0.02.</strong></p> <h4>2. Retrieval</h4> <p>Two types of context were used. Feeding the context retrieved in <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>'s <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">270k-articles notebook</a> to the models trained above and running inference significantly increases the score. For example, an itk16 model trained on Chris's 60k goes from <strong>CV 0.9083 to 0.9783</strong> on the 200-sample training dataset. <strong>The only change is swapping the notebook's Longformer for our DeBERTa!</strong><br> It is amazing how much the score can go up zero-shot, and we do not fully understand why.<br> Our teammate <a href="https://www.kaggle.com/domdejonge" target="_blank">@domdejonge</a> showed that changing ngram_range in TfidfVectorizer increased LB, but it had no effect on PB.</p> <h4>3. Inference and Ensemble</h4> <p>We had a hard time getting inference to run. We realized too late that the cause of our errors was a VRAM OOM caused by overly long token sequences; in the end, we branched around it with try/except statements.<br> We concluded empirically that <strong>maximizing MAP@3 works better for LB (and PB) than minimizing loss</strong>, but we still used simple averaging to weight the models for robustness.</p> <h4>4. Submission</h4> <p>Submitting was also a struggle. 
We kept getting errors until <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a> found a <a href="https://www.kaggle.com/discussions/product-feedback/276051" target="_blank">discussion showing that when output exceeds 500</a>, Submission CSV not Found appears.</p> <h4>5. Overall Result</h4> <p>Although about half of the submission was consumed by errors, we managed to make it a reality. The model results are as follows. The 4 models and 2 contexts produce a total of 8 predicted values.</p> <table> <thead> <tr> <th>model_name</th> <th>dataset</th> <th>map@3 200</th> <th>Loss 200</th> <th>weight</th> </tr> </thead> <tbody> <tr> <td>itk16_context1</td> <td>Chris'60k</td> <td>0.9600</td> <td>0.2481</td> <td>0.125</td> </tr> <tr> <td>itk16_context2</td> <td>Chris'60k</td> <td>0.9783</td> <td>0.1457</td> <td>0.125</td> </tr> <tr> <td>cody_context1</td> <td>Chris'60k</td> <td>0.9650</td> <td>0.3156</td> <td>0.125</td> </tr> <tr> <td>cody_context1</td> <td>Chris'60k</td> <td>0.9850</td> <td>0.2207</td> <td>0.125</td> </tr> <tr> <td>jonge_context1</td> <td>Chris'60k</td> <td>0.9616</td> <td>0.4208</td> <td>0.125</td> </tr> <tr> <td>jonge_context2</td> <td>Chris'60k</td> <td>0.9741</td> <td>0.3846</td> <td>0.125</td> </tr> <tr> <td>itk17_context1</td> <td>Chris'60k</td> <td>0.9666</td> <td>0.2356</td> <td>0.125</td> </tr> <tr> <td>itk17_context2</td> <td>Chris'60k</td> <td>0.9825</td> <td>0.1467</td> <td>0.125</td> </tr> <tr> <td>---</td> <td>---</td> <td>---</td> <td>---</td> <td>---</td> </tr> <tr> <td>ensembled</td> <td>-</td> <td>0.9916</td> <td>0.2386</td> <td>1.0</td> </tr> </tbody> </table> <p><strong>LB was 0.920 and PB is 0.907.</strong></p> <p>As a bonus, We've included some additional information below</p> <h1>PB:0.911 solution</h1> <p>Our best submission was PB0.911. we'll give you a quick introduction.<br> Remove itk17 model from submission of 0.907.<br> We used Chris's 60k for the dataset, but change the context to the one introduced by <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> instead. Let the model trained in context 2 (retrieved_articles_parsed). This is called itk30. <br> This model is not better than Zero Shot, but it does provide more versatility.</p> <table> <thead> <tr> <th>model_name</th> <th>dataset</th> <th>map@3 200</th> <th>Loss 200</th> <th>weight</th> </tr> </thead> <tbody> <tr> <td>itk30_context1</td> <td>Chris'60k + replaced context2</td> <td>0.9558</td> <td>0.2131</td> <td>0.125</td> </tr> <tr> <td>itk30_context2</td> <td>Chris'60k + replaced context2</td> <td>0.9766</td> <td>0.1475</td> <td>0.125</td> </tr> <tr> <td>---</td> <td>---</td> <td>---</td> <td>---</td> <td>---</td> </tr> <tr> <td>ensembled</td> <td>-</td> <td>0.9891</td> <td>0.2368</td> <td>1.0</td> </tr> </tbody> </table> <p><strong>LB was 0.910 and PB is 0.911.</strong></p> <h4>optimising MAP@3</h4> <p>It is difficult to optimize map3 directly. <a href="https://www.kaggle.com/code/itsuki9180/llm-sciex-optimise-ensemble-weights" target="_blank">We proposed loss optimization</a>, but we also suggest a search method using random values. There is an element of anxiety as it overfits CV, but in this competition, CV, LB, and PB all worked together. 
Here is the code for the case where there are 8 weights to optimize.</p> <pre><code># random search over ensemble weights with early stopping
# (function name and the trial/es/seed defaults are illustrative reconstructions)
def get_best_weights(preds, trial=100000, es=3000, seed=42):
    np.random.seed(seed)
    best = 0.0
    es_step = 0
    w1b, w2b, w3b, w4b, w5b, w6b, w7b, w8b = -1, -1, -1, -1, -1, -1, -1, -1
    for t in tqdm(range(trial)):
        ws = np.random.rand(8)
        ws /= np.sum(ws)                      # normalize so the weights sum to 1
        pred = 0
        for i in range(8):
            pred += preds[i] * ws[i]
        overall_score = calc_mtr(pred)        # MAP@3 on the validation set
        if overall_score &gt; best:
            best = overall_score
            best_loss = log_loss_numpy(pred)
            w1b = ws[0]; w2b = ws[1]; w3b = ws[2]; w4b = ws[3]; w5b = ws[4]; w6b = ws[5]; w7b = ws[6]; w8b = ws[7]
            print(best)
            print(ws)
            es_step = 0
        else:
            es_step += 1
            if es_step &gt;= es:
                break
    return w1b, w2b, w3b, w4b, w5b, w6b, w7b, w8b
</code></pre> <h4>FGM and other adversarial training</h4> <p>We used <a href="https://www.kaggle.com/takamichitoda" target="_blank">@takamichitoda</a>'s <a href="https://www.ai-shift.co.jp/techblog/2985" target="_blank">implementation of FGM (in Japanese)</a>.<br> AWP was mentioned <a href="https://www.kaggle.com/code/itsuki9180/introducing-adversarial-weight-perturbation-awp" target="_blank">in my public notebook</a>, but it had no effect.</p> <h4>Our code (we will add more from time to time)</h4> <p><a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>'s <a href="https://www.kaggle.com/code/itsuki9180/70th-place-solution-itk16-training" target="_blank">itk16 FGM training notebook</a><br> <a href="https://www.kaggle.com/domdejonge" target="_blank">@domdejonge</a>'s <a href="https://www.kaggle.com/code/domdejonge/label-smoothing-train" target="_blank">label_smoothing w/HF🤗 Trainer notebook</a><br> <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>'s <a href="https://www.kaggle.com/code/itsuki9180/private-lb-0-911-submission" target="_blank">PB:0.911 submission notebook</a></p> <h1>Conclusion</h1> <p>Anyway, congratulations to all the winners! And to everyone who didn't give up on this competition until the end: don't forget that what you gained getting here is your bread and butter. Kaggle is not only a place for winning, but also a place for learning.<br> See you at the next competition!</p>
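<p>To make the FGM trick above concrete, here is a minimal sketch of Fast Gradient Method adversarial training as it is commonly applied to a transformer's word-embedding layer. This is not the team's exact code; the epsilon value and the embedding parameter name are illustrative.</p> <pre><code>import torch

class FGM:
    """Perturb the embedding weights along the gradient, take a second backward
    pass on the adversarial loss, then restore the original weights."""
    def __init__(self, model, epsilon=1.0, emb_name="word_embeddings"):
        self.model = model
        self.epsilon = epsilon
        self.emb_name = emb_name   # substring identifying the embedding parameters
        self.backup = {}

    def attack(self):
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0 and not torch.isnan(norm):
                    param.data.add_(self.epsilon * param.grad / norm)

    def restore(self):
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

# one training step (sketch):
#   loss = model(**batch).loss; loss.backward()
#   fgm.attack(); model(**batch).loss.backward(); fgm.restore()
#   optimizer.step(); optimizer.zero_grad()
</code></pre>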
Kaggle - LLM Science Exam
7th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Chan Kha Vu, thanks!<br> As you say, our datasets do not properly parse formulas and numbers.<br> I believe that the MB's STEM 270K dataset adequately supplements that point.</p> <p>I've published the CausalLM model weights (LoRa adapter weights) and inference code <a href="https://www.kaggle.com/datasets/takashisomeya/llm-science-exam-lora-weights" target="_blank">here</a>.<br> My teammate <a href="https://www.kaggle.com/kashiwaba" target="_blank">@kashiwaba</a> is also planning to release the CausalLM Reward Modeling weights soon and these are even more promising in terms of LB.<br> Hope you'll get good results with our LLM!</p> <p>Thanks to Kaggle staff for organizing the brand new and interesting competition.<br> I really learned a lot of new things through this competition, including RAG and fine-tuning of LLM. I would like to thank all those who contributed meaningful discussions and published high-level notebooks during the competition.<br> I would also like to thank my teammates <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a>, <a href="https://www.kaggle.com/anonamename" target="_blank">@anonamename</a> and <a href="https://www.kaggle.com/kashiwaba" target="_blank">@kashiwaba</a> for refining our solution with a variety of ideas.</p> <h1>Overview</h1> <ul> <li>Use contexts created from various wikipedia data.</li> <li>Ensemble of DeBERTa-V3 large models and LLMs.</li> <li>Multi-stage strategy: incorporating various combinations of contexts and models by reducing the number of questions to be predicted at the later stage.</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6102861%2Fb579b81a690fa6d046945483570c4cdf%2Fllm-overview-v4.png?generation=1697287757990102&amp;alt=media" alt=""></p> <h1>Retrieval part</h1> <h3>Wikipedia-20230801 (article retrieve -&gt; sentence retrieve)</h3> <p>This search strategy is based on the method used in JJ's <a href="https://www.kaggle.com/code/jjinho/open-book-llm-science-exam" target="_blank">public notebook</a>, but with the following changes:</p> <ul> <li>We use <a href="https://www.kaggle.com/datasets/bwandowando/wikipedia-index-and-plaintext-20230801/versions/2" target="_blank">Wikipedia-20230801-dump</a>. 
This contains more articles (6.67M) than <a href="https://www.kaggle.com/datasets/jjinho/wikipedia-20230701" target="_blank">Wikipedia Plaintext (2023-07-01)</a> (6.27M).</li> <li>For the article retrieve, we used the embeddings consisting of the full text of the article (for each article, we split all text in an article into chunks of 256 tokens).</li> <li>We use gte-small for creating the above embeddings.</li> <li>We have modified the sectionize_documents function for better sententialization.</li> <li>We use TF-IDF for the sentence retrieve.</li> </ul> <h3>MB 270K + TF-IDF</h3> <p>We use the MB's 270K dataset and TF-IDF method from <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">MB's great notebook</a>.</p> <h3>MB 270K + sentence-transformer</h3> <p>We search for MB's 270K text using gte-small with max_seq_length=512</p> <h3>Cohere TF-IDF</h3> <p>This method is based on the hypothesis that paragraph-based search and sparse method may be better than dense methods specifically for the competition data. The questions and options often contain very specific terms which might not be well represented via dense embeddings. Also, paragraph-based search is generally considered more effective for question answering. The challenge is how to effectively perform sparse retrieval on many millions of Wikipedia paragraphs.</p> <ul> <li>We start with Cohere en-wiki dataset via HF. It's not the latest dump, which might cause some problems as some articles changed, but it was conveniently split into paragraphs. It also suffers from the problem mentioned on the forums, some LUA numbers/expressions are not properly parsed. </li> <li>We use BERT tokenizer and vocabulary for TF-IDF to keep a reasonably sized vocab. We train TF-IDF vectorizer on the first 10M paragraphs and use that vectorizer for all paragraphs.</li> <li>We precompute indexes and store both indexes and paragraphs in kaggle datasets, in 4x batches of 10 million rows</li> <li>We use <a href="https://github.com/ing-bank/sparse_dot_topn" target="_blank">fast sparse KNN lookup</a>, co-developed and previously shared by Ahmet Erdem, to find top 5 nearest neighbors context paragraphs for each question. </li> </ul> <h1>Model part</h1> <h2>Validation</h2> <p>Initially, we used the samples provided in train.csv for validation, but after achieving CV ~ 0.99, it became challenging to assess the correlation between CV and LB. Therefore, we generated approximately 3000 additional samples by Chat-GPT and used these as new validation set. 
We used the <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/425106" target="_blank">130K STEM articles</a> as Chat-GPT input.<br> Although there is some blurring, CV and LB correlate relatively well, and the combination of models in the final ensemble was determined based on the CV in this validation set.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6102861%2F50ab5d20ecab000cb86c96947e62ba54%2Fllm_cv_vs_lb.jpg?generation=1697277999186303&amp;alt=media"></p> <h2>DeBERTa-V3 large</h2> <p>Based on <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">cderotte's notebook</a>, we train the following four models.</p> <ul> <li>max_length=512, microsoft/deberta-v3-large (x2, we will refer to these as v1 and v2 in the Ensemble section)</li> <li>max_length=512, deepset/deberta-v3-large-squad2 (x1, we will refer to it as v3)</li> <li>max_length=1024, microsoft/deberta-v3-large (x1)</li> </ul> <h2>LLM</h2> <h3>CausalLM</h3> <p>We used AutoModelForCausalLM class from transformers and SFTTrainer class from trl for LLM training.</p> <h4>Training</h4> <ul> <li>We trained the model to output one of the letters A, B, C, D, or E with the following prompt</li> </ul> <pre><code>### Input: &lt;context&gt;\n\n### System: Answer the following multiple choice question by giving the most appropriate response. Answer should be one among [A, B, C, D, E]. Use the input text above a reference for your answers needed.### Question: &lt;prompt&gt;\nA) &lt;option A&gt;\nB) &lt;option B&gt;\nC) &lt;option C&gt;\nD) &lt;option D&gt;\nE) &lt;option E&gt;\n\n### Answer: </code></pre> <ul> <li>Use QLoRa for memory saving.</li> <li>Shuffling the choices during the training worked a bit.</li> </ul> <h4>Inference</h4> <ul> <li>max_new_tokens=1</li> <li>We put the logits of the [A,B,C,D,E] tokens of the first output into softmax function to obtain the probability of each choice.</li> </ul> <h3>CausalLM Reward Modeling</h3> <ul> <li>Modified trl's RewardTrainer class to train AutoModelForCausalLM with Reward Modeling. </li> <li>Input is in the following format<br> <code>&lt;context&gt; #### &lt;prompt&gt; #### &lt;option&gt; ####</code></li> <li>Generate pairs with the correct choice as "chosen" and any other choice as "rejected" (this is the same as normal Reward Modeling).</li> <li>Use the predicted value of the next "yes" token in the input string as the logit to be compared.</li> <li>LLMs that do not support AutoModelForSequenceClassification can now be trained with Reward Modeling, and can be combined with normal CausalLM models to ensemble more diverse LLM models.</li> </ul> <h3>Single Model Results</h3> <p>context : Wikipedia-20230801</p> <table> <thead> <tr> <th>Model</th> <th>Type</th> <th>Private LB</th> <th>Public LB</th> </tr> </thead> <tbody> <tr> <td>Mistral-7B-v0.1</td> <td>CausalLM</td> <td>0.874</td> <td>0.868</td> </tr> <tr> <td>Mistral-7B-OpenOrca</td> <td>CausalLM</td> <td>0.882</td> <td>0.876</td> </tr> <tr> <td>Mistral-7B-v0.1</td> <td>CausalLM Reward Modeling</td> <td>0.897</td> <td>0.888</td> </tr> <tr> <td>Mistral-7B-OpenOrca</td> <td>CausalLM Reward Modeling</td> <td>0.896</td> <td>0.888</td> </tr> <tr> <td>OpenOrca-Platypus2-13B</td> <td>CausalLM</td> <td>0.872</td> <td>0.880</td> </tr> <tr> <td>Llama2-chat-AYT-13B</td> <td>CausalLM</td> <td>0.872</td> <td>0.875</td> </tr> </tbody> </table> <h1>Ensemble</h1> <p>The problem with LLM is its long inference time. 
Therefore, we first use Deberta models to filter out easy questions, which we define as questions with a high prediction probability (max probability &gt; 0.7), and then use LLMs to infer only remaining questions. In this way, the inference time is reduced by narrowing down the data to be inferred with LLM. We also applied this step-by-step inference to Deberta models to keep the inference within 9 hours. This ensemble method allowed us to add 3 LLMs to the 9 hours of inference. The final prediction is a weighted average of 8 Deberta models and 3 LLMs across different contexts. We assigned higher weights to LLMs than to Deberta models. The weights were determined from map3 of the validation set.</p> <h3>Final submission</h3> <p>Private LB : 0.925, Public LB : 0.931</p> <table> <thead> <tr> <th>Model</th> <th>Context</th> <th>Stage</th> <th>Weight</th> </tr> </thead> <tbody> <tr> <td>DeBERTa-V3 large (max_length=512), v1</td> <td>Wikipedia-20230801</td> <td>1st</td> <td>1.0</td> </tr> <tr> <td>DeBERTa-V3 large (max_length=512), v1</td> <td>MB 270K + TF-IDF</td> <td>2nd</td> <td>1.0</td> </tr> <tr> <td>DeBERTa-V3 large (max_length=512), v2</td> <td>Wikipedia-20230801</td> <td>3rd</td> <td>0.5</td> </tr> <tr> <td>DeBERTa-V3 large (max_length=512), v2</td> <td>MB 270k + sentence-transformer</td> <td>3rd</td> <td>0.5</td> </tr> <tr> <td>DeBERTa-V3 large (max_length=1024)</td> <td>Wikipedia-20230801</td> <td>3rd</td> <td>0.5</td> </tr> <tr> <td>DeBERTa-V3 large (max_length=1024)</td> <td>MB 270k + sentence-transformer</td> <td>3rd</td> <td>0.5</td> </tr> <tr> <td>DeBERTa-V3 large (max_length=512), v3</td> <td>Cohere TF-IDF</td> <td>3rd</td> <td>0.5</td> </tr> <tr> <td>DeBERTa-V3 large (max_length=512), v3</td> <td>MB 270K + TF-IDF</td> <td>3rd</td> <td>0.5</td> </tr> <tr> <td>Mistral-7B-v0.1, CausalLM Reward Modeling</td> <td>Wikipedia-20230801</td> <td>4th</td> <td>1.25</td> </tr> <tr> <td>Mistral-7B-v0.1, CausalLM Reward Modeling</td> <td>MB 270K + TF-IDF</td> <td>4th</td> <td>1.5</td> </tr> <tr> <td>OpenOrca-Platypus2-13B, CausalLM</td> <td>Wikipedia-20230801</td> <td>4th</td> <td>1.25</td> </tr> </tbody> </table>
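<p>A key concept from the CausalLM inference above is scoring the five options with a single forward pass: nothing is generated, the logits of the "A"–"E" tokens at the next-token position are simply read out and passed through softmax. A minimal sketch follows; the model name and the prompt string are placeholders, not the team's exact setup.</p> <pre><code>import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"   # placeholder; any (fine-tuned) causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
model.eval()

# token id of each answer letter (take the last piece in case the tokenizer adds a prefix)
choice_ids = [tokenizer(letter, add_special_tokens=False).input_ids[-1] for letter in "ABCDE"]

def choice_probabilities(prompt):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]        # logits after "### Answer: "
    return torch.softmax(next_token_logits[choice_ids], dim=-1)  # P(A), ..., P(E)
</code></pre>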
Kaggle - LLM Science Exam
82nd Place Solution for the Kaggle - LLM Science Exam Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats with Kaggle Competitions Master achievement!</p> <p>First of all, I'd like to thank the organizers and congratulations to all the winners!<br> Thanks to everyone who shared code, datasets, and ideas. Especially to <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a> who shared an open book approach. </p> <h1>Context</h1> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/overview" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/data" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/data</a></li> </ul> <h1>Overview of the Approach</h1> <ul> <li>Self crafted wikipedia dataset (see Wikipedia data section)</li> <li>Similarity search using bge-base/bge-large embeddings</li> <li>Deberta ensemble</li> </ul> <h1>Wikipedia data</h1> <p>I found that publicly available wiki dumps have missing text and values due to html tags/lua macroses removals. I tried many different processing repos and try to implement my own approach but it was a waste of time. So my final solution:</p> <ul> <li>Run private mediawiki service</li> <li>Install ~10 necessary wiki plugins (e.g. math formula renderer)</li> <li>Tweak mysql/apache/memcached performace</li> <li>Upload ~6.5M articles via wiki API</li> <li>Download rendered articles via wiki API</li> </ul> <p>Some insights:</p> <ul> <li>Final mysql DB size ~600GB (most likely due to diffrent kinds of indexes)</li> <li>If a page contains many macroses it may take up to 2 mins to render it first time. Once it's rendered, I suppose it's saved in cache and the next request renders it almost immediatly</li> <li>I achived speed of ~2k articles/min using 64C/256GB instance. So it took ~3 days to upload articles and another ~3 days to download</li> </ul> <p>Result:<br> <a href="https://postimg.cc/sBNrRKwQ" target="_blank"><img src="https://i.postimg.cc/G2chG5DQ/Screen-Shot-2023-10-12-at-00-34-01.png" alt="Screen-Shot-2023-10-12-at-00-34-01.png"></a></p> <h1>Embeds</h1> <p>~6.5M articles contains ~22M sections. It takes time calculate embeddings. I tried a few:</p> <table> <thead> <tr> <th>model</th> <th>map@100 on train</th> </tr> </thead> <tbody> <tr> <td>all-MiniLM-L6-v2</td> <td>0.670</td> </tr> <tr> <td>BAAI/bge-base-en</td> <td>0.706</td> </tr> <tr> <td>BAAI/bge-large-en</td> <td>0.727</td> </tr> </tbody> </table> <p>BAAI/bge-large-en embeds size of 22M sections ~ 50GB in fp16. 
</p> <h1>Pipeline</h1> <p><a href="https://postimg.cc/7bWcKfb9" target="_blank"><img src="https://i.postimg.cc/ZKbSKNSG/Screen-Shot-2023-10-12-at-01-12-25.png" alt="Screen-Shot-2023-10-12-at-01-12-25.png"></a></p> <h1>What would I do differently?</h1> <ul> <li>Invest more time on CV dataset</li> <li>Models diversity</li> <li>Start ensembling earlier</li> <li>Try different augumentations</li> </ul> <h1>What didn't work</h1> <ul> <li>Longformer model (0.864/0.851)</li> <li>MLM deberta fine-tuning on wikipedia texts </li> </ul> <h1>Optimization</h1> <ul> <li>inference 2xT4 deberta fp16</li> <li>similarity search 2xT4</li> <li>store embeds in fp16</li> <li>SentenceTransformers fp16 inference showed worse performance</li> </ul> <h1>Sources</h1> <ul> <li><a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/434913" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/434913</a></li> <li><a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383</a></li> </ul>
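<p>The embedding pipeline above (BAAI/bge-large-en over ~22M sections, stored in fp16) can be reproduced in miniature with sentence-transformers and faiss. The snippet below is a sketch with placeholder section texts; note that faiss itself expects float32 at add/search time, so the fp16 copy is only for storage.</p> <pre><code>import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-large-en")
sections = ["placeholder wiki section text 1", "placeholder wiki section text 2"]

emb = model.encode(sections, batch_size=256, normalize_embeddings=True)
np.save("section_embeds_fp16.npy", emb.astype(np.float16))   # roughly halves storage vs fp32

index = faiss.IndexFlatIP(emb.shape[1])      # inner product == cosine on normalized vectors
index.add(emb.astype(np.float32))            # faiss wants float32
query = model.encode(["question text plus answer options"], normalize_embeddings=True)
scores, ids = index.search(query.astype(np.float32), 2)
</code></pre>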
Kaggle - LLM Science Exam
85th place solution; 0.906 public and private scores; our 2nd back-to-back medal; Kaggle Experts now!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all thanks to Kaggle team for arranging such an amazing competition and congratulations to all the participants and winners.</p> <p>We have participated in back to back generative AI competitions in last 7-8 months. 1st one was based upon stable diffusion [<a href="https://www.kaggle.com/competitions/stable-diffusion-image-to-prompts" target="_blank">https://www.kaggle.com/competitions/stable-diffusion-image-to-prompts</a>] where we earned a bronze medal and in this one we earned a silver medal. <br> More importantly, it has been a great learning experience from so many inspiring kagglers. Next landmark is Kaggle master 😉. That path will be quite tough but the biggest gain will be the learnings irrespective of where we reach. </p> <p>Our solution: <br> Training:<br> We mainly trained deberta v3 large models (all layers were fine tuned) with context lengths 512, 768 and 1024 on [<a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2</a>] and also on some random samples from [<a href="https://www.kaggle.com/datasets/cdeotte/40k-data-with-context-v2" target="_blank">https://www.kaggle.com/datasets/cdeotte/40k-data-with-context-v2</a>]<br> Hyperparams:<br> NUM_TRAIN_EPOCHS = 5<br> LEARNING_RATE = 2e-6<br> GLOBAL_BATCH_SIZE = 32<br> PRECISION = BF16<br> For context length=1024, we had to increase the learning rate to 1e-5 and managed to train it for ~3 epochs.</p> <p>Inference:<br> m1 - deberta trained with context len 512<br> m2 - deberta trained with context len 768<br> m3 - deberta trained with context len 1024 </p> <p>1st Ensembling on wiki 6M RAG: <br> FILTER_LEN = 10<br> IND_SEARCH = 20<br> NUM_SENTENCES_INCLUDE = 20<br> CONTEXT_LEN = 3000 <br> predictions_ensemble_1 = m1 * 0.2 + m2 * 0.6 + m3 * 0.2</p> <p>2nd Ensembling on STEM 270K PARSED and UNPARSED:<br> We made some updates to the tokenizer, maximum length and dataset creation logic here. <br> predictions_ensemble_2 = m1 * 0.2 + m2 * 0.6 + m3 * 0.2</p> <p>final_ensemble = predictions_ensemble_1 * 0.2 + predictions_ensemble_2 * 0.8</p> <p>Inference notebook: <br> [<a href="https://www.kaggle.com/code/puneeshk/final-inference-notebook" target="_blank">https://www.kaggle.com/code/puneeshk/final-inference-notebook</a>] <br> This will get to 0.908 public and private scores and around 55th rank but we could not solve the OOM error in this notebook within the deadline time and ran out of submissions.</p> <p>We are ensuring that we are upvoting all discussions and notebooks that really helped us in this competition and hope to be more proactive in the discussions in our next competitions. 
Also we are planning to conduct more research on this competition dataset and problem and try to come up with a whitepaper.</p> <p>Our linkedin profiles:<br> <a href="https://www.linkedin.com/in/puneeshkhanna/" target="_blank">https://www.linkedin.com/in/puneeshkhanna/</a><br> <a href="https://www.linkedin.com/in/ankur-shukla-3144906b/" target="_blank">https://www.linkedin.com/in/ankur-shukla-3144906b/</a></p> <p>Best Regards,<br> Puneesh Khanna / Ankur Shukla</p> <p>thank you <a href="https://www.kaggle.com/shivamtaneja2304" target="_blank">@shivamtaneja2304</a> </p>
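<p>The two-level blend described above (three DeBERTas per retrieval pipeline, then 0.2/0.8 across the two pipelines) reduces to a few lines of NumPy once each model has produced per-question probabilities over the five options. The arrays below are random placeholders standing in for real model outputs.</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
n_questions = 200
# stand-ins for softmax outputs of m1/m2/m3 on each retrieval pipeline, shape (n, 5)
m1_w, m2_w, m3_w = (rng.dirichlet(np.ones(5), n_questions) for _ in range(3))  # wiki 6M RAG
m1_s, m2_s, m3_s = (rng.dirichlet(np.ones(5), n_questions) for _ in range(3))  # STEM 270K RAG

ens_wiki = 0.2 * m1_w + 0.6 * m2_w + 0.2 * m3_w
ens_stem = 0.2 * m1_s + 0.6 * m2_s + 0.2 * m3_s
final = 0.2 * ens_wiki + 0.8 * ens_stem

letters = np.array(list("ABCDE"))
top3 = letters[np.argsort(-final, axis=1)[:, :3]]   # MAP@3 expects 3 ranked guesses
prediction = [" ".join(row) for row in top3]
</code></pre>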
Kaggle - LLM Science Exam
From nothing to something... a 103 LB jump!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Cheers <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a>. I just read your writeup on a cleaner wiki dump and honestly, that is so useful. Thanks so much for that and all your contributions here.</p> <p><a href="https://www.kaggle.com/simonveitner" target="_blank">@simonveitner</a> Thank you, and congrats to you for your silver! Very deserved, and also appreciate all your help on the forums :)</p> <p>Cheers <a href="https://www.kaggle.com/shivamtaneja2304" target="_blank">@shivamtaneja2304</a>, and well done to you as well!</p> <p>Congratulations to all winners!</p> <p>Just like the title says, I was quite surprised: although I slipped out of bronze into the 300s in ranking for the whole of the last two weeks, when the private LB was released I found that I made the cut after all. I ended up with a medal (and a rank of 197).</p> <p>But in all honesty, my primary reason for joining the comp was to learn more about the LLM space. I knew there was so much progress being made over the past few months in this area, and I had no clue how or where to start. So firstly, very grateful to Kaggle for this competition!</p> <p>Regarding my solution, here is a summary:</p> <ul> <li>Uses <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>'s amazing notebook for wiki context retrieval. I modified it to use 14 chunks and 10 top queries, and also fit the tfidf on a larger fraction of the text corpus.</li> <li>Made a custom wiki index and custom wiki embeddings, and then used faiss to search the index for relevant articles. The embeddings and index were made on the title and 2 paragraphs for all pages.</li> <li>I then use a single deberta model fine-tuned on a subset of the 60k data shared by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> + a custom dataset generated by gpt3.5.</li> <li>I then run the model on the parsed and cohere contexts of the 270k wiki data, and if the ensemble has a top probability score higher than 0.41, I use that prediction; otherwise I use the prediction from the same model using my custom 6M wiki embeddings.</li> </ul> <p>Approaches I tried but failed to capitalise on:</p> <ul> <li>4 different trained models, each having predictions from 3 different contexts (parsed, cohere and 6M wiki), and then running an extensive dynamic weighted voting ensemble. This got quite complex in the end and gave me a private and public LB score of 0.892. I know I could have improved it, but I did not have the time to work on this.</li> <li>A key aspect that I realised really late into the comp is that I should have experimented with more models for context retrieval, as I see that played a significant role.</li> </ul> <p>But there's so much I've learned from this comp, and I really am grateful to the community for sharing so much on here. 
I'm sure I'm gonna be visiting back now and then to read up/refresh my memory on some LLM related concept discussed here.</p> <p>Shoutout to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a>, <a href="https://www.kaggle.com/nlztrk" target="_blank">@nlztrk</a>, <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a>, <a href="https://www.kaggle.com/cpmp" target="_blank">@cpmp</a> (congrats on your huge private LB jump!), <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>, <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a>, <a href="https://www.kaggle.com/serjhenrique" target="_blank">@serjhenrique</a>, <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a>, <a href="https://www.kaggle.com/simonveitner" target="_blank">@simonveitner</a>, <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> and so many more contestants for being super helpful and active on the discussion forums. Until next time (or maybe season 2….?)</p>
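<p>The decision rule in the summary above ("if the top probability from the 270k contexts exceeds 0.41, keep that prediction, otherwise fall back to the 6M-wiki prediction") is a simple confidence-gated ensemble. A sketch of that logic, with placeholder array names:</p> <pre><code>import numpy as np

THRESHOLD = 0.41   # threshold quoted in the write-up

def confidence_fallback(probs_270k, probs_6m):
    """Keep the 270k-context prediction where it is confident, otherwise use
    the prediction built from the custom 6M-wiki embeddings. Shapes: (n, 5)."""
    confident = probs_270k.max(axis=1) &gt; THRESHOLD
    blended = probs_6m.copy()
    blended[confident] = probs_270k[confident]
    return blended
</code></pre>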
Kaggle - LLM Science Exam
How the 300 validation dataset is generated & 57th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for the feedback!</p> <p>Glad to know that the dataset helped!</p> <h2>How the 300 validation dataset is generated</h2> <p>The <a href="https://www.kaggle.com/datasets/yalickj/dataset-wiki-new-1" target="_blank">300</a> validation dataset was generated by picking interesting contexts from the <a href="https://www.kaggle.com/datasets/nbroad/wiki-20220301-en-sci" target="_blank">130k stem dataset</a> and feeding prompts to GPT-3.5-Turbo. The distribution of topics is quite random with a focus on physics and biology (this competition has really been a good learning experience as I dived into both LLM and interesting science topics at the same time!)</p> <p>I wasn't able to check every question and it seems that some of the samples are flawed (e.g., incorrect answer, ambiguous question/options, more than one correct answers, no correct answers). Many thanks to <a href="https://www.kaggle.com/wuwenmin" target="_blank">@wuwenmin</a> for correcting some of the questions in <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/444933" target="_blank">300 Validation Samples Corrected by GPT4 </a>!</p> <p>The prompt used for the generation is as follows:</p> <pre><code>You are a professional multiple-choice question generation robot. When generating multiple-choice questions, you make sure that the choices are balanced too short, so that the questions are quite hard to answer. Please choose the most suitable question the : [, , , , , , , , , ] Please read the following material generate a multiple-choice question five options marked A, B, C, D, E, respecively, only one of the five correct the other four options are incorrect. Provide the correct answer after the question choices. Create choices that are long enough but vary length (e.g., more than words, but can be even longer, some of the answers can be shorter). The beginning phrases of the choices are generally identical so that we know the choices are on the same topic, but this a strict rule. It desirable to randomly asign the choices. Material: { } </code></pre> <h2>57th place solution</h2> <h3>Retrieval</h3> <ul> <li>The retrieval method is mainly a combination of two pipelines:<ul> <li>Encoding text sections split by '\n' from a 435k title dataset (270k + 130k + an additional set of titles found by k-means) with all-MiniLM-L6-v2 and building a flat faiss-gpu index. Retrieve at most 66 sections and keep the ones with distance &lt;0.9.</li> <li>Use <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> 's public global tf-idf retrieval pipeline and keep the top 4 and keep the ones with similarity &gt;0.25.</li> <li>Combine the two groups of retrieved sections above and re-rank them with another tf-idf vectorizer with ngram_range=(1, 4). Keep the top 22 sections after re-ranking and filter out the ones with less or equal to 6 words.</li></ul></li> <li>To construct the prompt, keep the first 2300 characters and combine it with the question and one of the options.</li> </ul> <h3>Model</h3> <ul> <li>An ensemble of public and private DeBERTa-v3-large models. 
The ones I trained mainly follow <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> 's pipeline with 60k dataset with roughly 6k filtered out by tf-idf.</li> <li>Validation is on the 500 dataset (which is indeed less effective when the score is higher than 0.89 level).</li> </ul> <h3>Other stuff</h3> <ul> <li>Adding 'sublinear_tf=True' to all of the tf-idfs gives a 0.003 boost on private LB but a 0.004 drop on public LB.</li> </ul> <p>Thanks for reminding, edited</p> <p>I edited after this reminder 😀</p> <p>very informative 👍</p>
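<p>The re-ranking step above (merge the faiss and global TF-IDF candidates, re-score them with a second TF-IDF vectorizer using ngram_range=(1, 4), keep the top 22, and drop very short sections) can be sketched as below. The function name and the cosine-similarity scoring are assumptions, not the author's exact code.</p> <pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rerank_sections(query, candidate_sections, top_k=22, min_words=6):
    """Re-rank candidate sections recalled by faiss + global TF-IDF."""
    vec = TfidfVectorizer(ngram_range=(1, 4))   # sublinear_tf=True gave mixed LB/PB results
    mat = vec.fit_transform([query] + candidate_sections)
    sims = cosine_similarity(mat[0], mat[1:]).ravel()
    ranked = [candidate_sections[i] for i in sims.argsort()[::-1][:top_k]]
    return [s for s in ranked if len(s.split()) &gt; min_words]   # drop very short sections
</code></pre>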
Kaggle - LLM Science Exam
Kaggle LLMSE 86th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Introduction</h1> <p>First of all, we would like to express our sincere appreciation to the competition organizers and competitors who shared their valuable thoughts and resources during the competition.</p> <p>Before starting, feel free to take a quick look at our <strong><a href="https://www.kaggle.com/code/hqfang/kaggle-llmse-inference" target="_blank">notebook</a></strong> solution. I would also like to introduce our team members: <strong><a href="https://www.kaggle.com/yuekaixueirc" target="_blank">@yuekaixueirc</a></strong>, <strong><a href="https://www.kaggle.com/lindseywei" target="_blank">@lindseywei</a></strong>, and <strong><a href="https://www.kaggle.com/hqfang" target="_blank">@hqfang</a></strong>.</p> <p>Our team's solution started from <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>'s <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">notebook</a>. We made changes in three different parts: <strong>Context Retrieval</strong>, <strong>Model Inference</strong>, and <strong>Ensemble Models</strong>.</p> <p><br></p> <h1>Context Retrieval</h1> <p>We kept the original RAG method in the notebook. On top of that, we borrowed one more context source from <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-2" target="_blank">notebook</a>. 
Moreover, when retrieving relevant contexts using prompt and options, we found that <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>'s way of weighting the prompt by repeating it three times worked better, so we did it the same way for the context retrieval we added, that is:</p> <pre><code>trn = trn(lambda x: (, x, x, x, x]), axis=) trn = trn + + trn + + trn + + trn </code></pre> <p>We also found that <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>'s way of reversing the order of relevant context to make relevant contexts closer to the prompt and options also useful, so we did it the same way for the context retrieval we added, that is:</p> <pre><code>contexts = r (trn(), total=(trn)): prompt_id = r prompt_indices = processed_wiki_text_data(wikipedia_file_data==prompt_id].values)] prompt_indices &gt; : prompt_index = faiss(wiki_data_embeddings, ) prompt_index(wiki_data_embeddings) context = context_temp = ss, ii = prompt_index(question_embeddings, NUM_SENTENCES_INCLUDE) _s, _i (ss, ii): context_temp(processed_wiki_text_data) context_temp() ((context_temp)): context += context_temp + contexts(context) contexts_wiki = contexts contexts gc() </code></pre> <p>We also changed the variable <code>NUM_SENTENCES_INCLUDE</code> in the added retrieval to 15 in order to make our contexts have less irrelevant information.</p> <p>Also, Inspired by <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a>'s <a href="https://www.kaggle.com/code/simjeg/platypus2-70b-with-wikipedia-rag" target="_blank">notebook</a>, we made use of the variable <code>IS_TEST_SET</code> to save local run time for context retrieval.</p> <h1>Model Inference</h1> <p>Instead of using a Longformer, we chose to use DeBERTa as it outperformed the Longformer in our experiments. Considering to add more diversity to the inference, also inspired by <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>'s <a href="https://www.kaggle.com/code/itsuki9180/llm-sciex-optimise-ensemble-weights" target="_blank">notebook</a>, we decided to use both OpenBook models and non-OpenBook models.</p> <p>Among the OpenBook models, we used three DeBERTas trained locally by <a href="https://www.kaggle.com/yuekaixueirc" target="_blank">@yuekaixueirc</a> primarily using <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">notebook</a>. 
Among the non-OpenBook models, we used one DeBERTa trained locally by <a href="https://www.kaggle.com/hqfang" target="_blank">@hqfang</a> primarily using <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a>'s <a href="https://www.kaggle.com/code/radek1/new-dataset-deberta-v3-large-training" target="_blank">notebook</a>, one DeBERTa trained locally by <a href="https://www.kaggle.com/lindseywei" target="_blank">@lindseywei</a> using the <a href="https://www.kaggle.com/code/datafan07/single-model-rewardtrainer-lora-llm/notebook" target="_blank">LoRA</a> technique, and one DeBERTa posted publicly by <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a> using the <a href="https://www.kaggle.com/code/itsuki9180/introducing-adversarial-weight-perturbation-awp" target="_blank">AWP</a> technique.</p> <p>All of the models we trained are posted publicly in this <strong><a href="https://www.kaggle.com/datasets/hqfang/kaggle-llmse-dataset" target="_blank">dataset</a></strong>.</p> <p>For the inference part of the OpenBook models, we changed the way of tokenizing samples to the one we used for training, that is:</p> <pre><code>def prepare_answering_input( tokenizer, question, options, context, =1024, ): first_sentence = [ + context ] * 5 second_sentences = [ + question + + options[option] + option range(0,5)] tokenized_examples = tokenizer( first_sentence, second_sentences, =max_seq_length, =, =, =, = ) input_ids = tokenized_examples[].unsqueeze(0) attention_mask = tokenized_examples[].unsqueeze(0) example_encoded = { : input_ids.(model.device.index), : attention_mask.(model.device.index), } return example_encoded </code></pre> <p>Note that we also changed the <code>max_seq_length</code> to 1024.</p> <p>We kept the inference code as it was, but changed some details to make it only output the probabilities for further use. What's more, we assigned weights of the three predictions of different contexts as 4:4:2. 
See more below:</p> <pre><code>def (model_dir): df_valid = pd.() trn2 = pd.() tokenizer = AutoTokenizer.(model_dir) model = AutoModelForMultipleChoice.(model_dir).() predictions = [] for index in ((trn2.shape[])): columns = df_valid.iloc[index].values question = columns[] options = [columns[], columns[], columns[], columns[], columns[]] context1 = trn2[][index] context2 = trn2[][index] context3 = trn2[][index] inputs1 = ( tokenizer=tokenizer, question=question, options=options, context=context1, ) inputs2 = ( tokenizer=tokenizer, question=question, options=options, context=context2, ) inputs3 = ( tokenizer=tokenizer, question=question, options=options, context=context3, ) with torch.(): outputs1 = (**inputs1) losses1 = -outputs1.logits[].().().() probability1 = torch.(torch.(-losses1), dim=-) with torch.(): outputs2 = (**inputs2) losses2 = -outputs2.logits[].().().() probability2 = torch.(torch.(-losses2), dim=-) with torch.(): outputs3 = (**inputs3) losses3 = -outputs3.logits[].().().() probability3 = torch.(torch.(-losses3), dim=-) probability_ = probability1 * + probability2 * + probability3 * predictions.(probability_.()) predictions = np.(predictions) return predictions </code></pre> <p>Also, Inspired by <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a>'s <a href="https://www.kaggle.com/code/simjeg/platypus2-70b-with-wikipedia-rag" target="_blank">notebook</a> again, we made use of the variable <code>IS_TEST_SET</code> to save GPU run time for OpenBook model inference.</p> <p><br></p> <h1>Ensemble Models</h1> <p>To avoid overfitting the public LB, we simply took the average of the three OpenBook models. When ensembling the three non-OpenBook models, we assigned specific weights we derived from previous experience when playing with <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>'s <a href="https://www.kaggle.com/code/itsuki9180/llm-sciex-optimise-ensemble-weights" target="_blank">notebook</a>. After that, we assigned the OpenBook models to take 90% of the total prediction, while the non-OpenBook models could only take 10%. Finally we can ensemble the inference using the weights below:</p> <pre><code> = = np.array(ws) = . / . = . / . [] = ws[] * openbook_w [] = ws[] * openbook_w [] = ws[] * openbook_w [] = ws[] * other_w [] = ws[] * other_w [] = ws[] * other_w = deberta_ob_preds_eric_0897 * ws[] + deberta_ob_preds_eric_088 * ws[] + deberta_ob_preds_eric_0916 * ws[] + deberta_preds_billy_v1 * ws[] + deberta_awp_preds_itk * ws[] + deberta_lora_preds_lindsey * ws[] </code></pre> <p><br></p> <h1>Conclusion</h1> <p>In summary, what we mainly did in this competition was trying different kinds of ensembles, including ensembling different contexts and models. In those ensembles, we mainly contributed to training different models that were better than the ones posted publicly, and we borrowed those useful RAGs and brought them together with some fine-tuning. We were deeply surprised by the amazing open-source environment which helped everyone thrive in this competition. </p> <p>Finally, thanks again to all the competitors who shared those invaluable ideas that we could work on. We wouldn't get to this position without their effort. </p> <p><br></p> <p><em>P.S. Our score of 0.905 on the PB was achieved by a solution that did not use non-OpenBook models, it maintained the same weights for contexts ensemble but took the average of the three OpenBook models. 
However, the approach we posted here could achieve a score of 0.906, which is the best score on PB among all of our submissions. Hence we chose to post the best one publicly. Also, the score of 0.915 on the LB was achieved by a solution that is exactly the same as the one posted here, where we only changed to take the average of the three contexts.</em></p>
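<p>Since the tokenization helper quoted above lost its string literals in this dump, here is a hedged reconstruction of the multiple-choice input preparation for AutoModelForMultipleChoice: the retrieved context is paired with "question + option" for each of the five options, truncated to max_seq_length=1024, and given an extra leading batch dimension. The separator handling and padding options are assumptions.</p> <pre><code>import torch
from transformers import AutoTokenizer

def prepare_answering_input(tokenizer, question, options, context, max_seq_length=1024, device="cuda"):
    first_sentences = [context] * 5
    second_sentences = [question + " " + options[i] for i in range(5)]
    enc = tokenizer(
        first_sentences,
        second_sentences,
        max_length=max_seq_length,
        padding="longest",
        truncation="only_first",     # truncate the context, not the question/option
        return_tensors="pt",
    )
    return {
        "input_ids": enc["input_ids"].unsqueeze(0).to(device),         # (1, 5, seq_len)
        "attention_mask": enc["attention_mask"].unsqueeze(0).to(device),
    }

# usage: probs = torch.softmax(model(**inputs).logits[0], dim=-1)   # one probability per option
</code></pre>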
Kaggle - LLM Science Exam
Public Top3 -> Private Top15 Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>What a big surprise! But we still lost the $10000 prize. It looks like overfit. From Top3 shake to Top15.<br> To be honest, we focus our all attention on RAG,and just use 2 deberta to do MutilChoice. So the overfitting problem may be in the deberta(not like other's LLM-7B).<br> My English is not very good, so some of the content is translated from Chinese using a translator。🤣🤣🤣🤣<br> We have a 5-way recall strategy. It can be divided into 2 parts, Sentence Model or TFIDF.<br> <strong>Here's a flowchart of our overall program.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7722685%2F597bed18f695c4a355f6a88029836607%2F.png?generation=1697180108921230&amp;alt=media" alt=""><br> <strong>Part1. Sentence Model:</strong><br> We use simcse to train our Sentence Model with SFT, it improves us LB 0.015. <strong>And specially, We use a trick called Difficult Sample Comparison Learning.</strong> We train our Simcse firstly, and use this model to inference our train dataset. This will produce Top5 recall, then we put Top4(except the ground truth) to Simcse Loss as negative label. This will give our model a stronger textual representation. And this gives our model an additional 0.005 LB lift over normal Simcse.<br> <strong>Note that all the Sentence Models we use are trained with simcse</strong><br> <strong>Part2. TFIDF</strong><br> Just like open-source notebook. <strong>And specially, we did a speedup on TFIDF and it only took 20min.</strong><br> <strong>In particular, since we found that the sentence-model cannot encode all the text of a wiki article, TFIDF or BM25 or LGBRanker are several effective complementary schemes to it, since they can all encode to all lengths of text.</strong><br> Then we have 5-way recall strategy.<br> 1.Dataset 6800K wikipedia. We use sentence model to recall Top1000 wiki articles. And we use Bm25(or a LightGBM Ranker) to reorder the Top1000 articles, only remain Top30 articles. Then split the articles and use our sentence model to get Top20 sentence. This way's LB score is 0.885.<br> 2.Dataset 270K wikipedia(2100K paragraphs). We use sentence model to recall Top5 paragraphs. <br> 3.Dataset 270K wikipedia(2100K paragraphs). We use TFIDF to recall Top8 paragraphs. <br> 4.Dataset 270K wikipedia(2800K paragraphs). We use sentence model to recall Top5 paragraphs. <br> 5.Dataset 270K wikipedia(2800K paragraphs). We use TFIDF to recall Top8 paragraphs.<br> Interestingly, we slice the dataset into multiple indexes and then recall them individually and then sort them according to search_score to get Top5. this solves the problem of faiss taking up too much memory. 🤣🤣🤣🤣🤣🤣<br> <strong>Some miscellany</strong><br> <strong>This is a Discussion Competition!</strong><br> It was a very competitive competition. But there was also a lot of great open source work that completely changed the competition. MB's 270K dataset is undoubtedly excellent work, but it also completely disrupted the game in the last half. 
After that, all the trick can be found in DISCUSSION.<br> Thx to <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">MB's notebook</a>, this leads us to fusion on multiple datasets.<br> Thx to <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/442595#2462052" target="_blank">william.wu</a>, this leads us to fusion on Faiss and TFIDF.<br> Thx to <a href="https://www.kaggle.com/code/mgoksu/0-807-sharing-my-trained-with-context-model" target="_blank">MGÖKSU</a>, it is a wonderful baseline!<br> This is the first kaggle competition I've been seriously involved in, thanks to my teammates and the organizers, and thanks to everyone for the open source work!<br> <strong>Life is full of surprise!</strong></p> <p>search_score, search_index = sentence_index.search(embed, 1000)</p> <p>His notebook use two different datasets, One contains 2.1 million paragraphs and the other contains 2.8 million paragraphs.These two datasets do not overlap each other and are both very valid.</p> <p>We just use about 80k train dataset to SFT our simcse. The code will be open-sourced on github when I'm done with it.</p> <p>I simply averaged them, which resulted in the recalled text on multiple datasets being weighted unevenly in the final scheme. For example, the 6800K data is only 1/5th of the total. so the SHAKE DOWN could also be due to underutilization of the recall text.</p> <p>At the same time, the high fitting ability of deberta may lead to different extremes of probability distributions under different recall text inputs, while the LLM-7B model has better robustness. As for the shake of 70B model, I prefer to think that it is the time limitation, it is difficult to do the fusion of multiple recalled texts, so it also leads to the shake.</p>
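<p>The memory trick mentioned above (slice the corpus into several indexes, recall from each, then merge by search score) looks roughly like the sketch below; building one flat inner-product index per shard is an assumption about the setup.</p> <pre><code>import heapq
import numpy as np
import faiss

def search_sharded(query_emb, shard_embeddings, k=5):
    """Search several small faiss indexes instead of one huge one, then merge
    the per-shard hits by score into a global top-k (keeps peak RAM bounded)."""
    hits, offset = [], 0
    for emb in shard_embeddings:                       # list of (n_i, d) float arrays
        index = faiss.IndexFlatIP(emb.shape[1])
        index.add(emb.astype(np.float32))
        scores, ids = index.search(query_emb.astype(np.float32), k)
        hits.extend((s, offset + i) for s, i in zip(scores[0], ids[0]) if i != -1)
        offset += emb.shape[0]
        del index                                      # free the shard before loading the next
    return heapq.nlargest(k, hits)                     # [(score, global_row_id), ...]
</code></pre>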
Kaggle - LLM Science Exam
Top 100 Solution - Fast RAPIDS TF-IDF RAG - 2xT4 GPU Acceleration!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Behrad!</p> <p>Congratulations Ruslan on strong solo silver finish!</p> <p>Thanks no fit just luck!</p> <p>Thanks Jude. Congratulations on solo bronze in a tough competition where going solo was difficult.</p> <p>Thank you JJ. Thanks for sharing the OpenBook technique </p> <p>Good suggestion. I did not try too much more with DeBERTa training (besides using different subsets of my 60k+40k+99k). I came to the conclusion that my trained DeBERTa is basically as good as it gets. And then i focused on creating more RAG pipelines.</p> <p>I am curious though. What it the best LB score that can be obtained using my RAPIDS TF-IDF notebook and single model DeBERTa? I achieve LB 0.900. Is it possible to use single DeBERTa (with RAG pipeline as is without changing it) and achieve LB 0.910 or LB 0.920? I'm very curious about this!</p> <p>Thanks for the tips, I'll experiment with these ideas.</p> <p><strong>UPDATE:</strong> After competition ended, I learned about the TF-IDF parameter <code>sublinear_tf=True</code> (<a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/446558" target="_blank">here</a> and <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/446693" target="_blank">here</a>). This boost CV and LB by an amazing <code>+0.010</code> WOW! I updated notebook version 2 <a href="https://www.kaggle.com/code/cdeotte/rapids-tf-idf-lb-0-904" target="_blank">here</a> with this parameter.</p> <p>Thank you Kagggggllllle</p> <p>Yes, i think he means the test data. Using multiple GPU for inference is easy because we just need to split the inference data (i.e. the test data) into parts. Then infer each part in its own thread on its own GPU in parallel. </p> <p>Training with multiple GPU is more difficult than inference because we need to combine losses before applying back propagation. Therefore training on multiple GPU requires sophisticated libraries. Whereas inference we can just write the code ourselves.</p> <p>Yes. I also observed <strong>Choice Permute TTA</strong>. During inference, we can extract context using <code>question + choice A + choice B + etc</code>. Then infer this context with DeBERTa. Next we can extract context using <code>question + choice E + choice B + etc</code> where we randomly permute the choices. When using sentence transformers, this will extract different context. Ensembling the two TTA boost CV and LB.</p> <p>Awesome competition! Thanks everyone for sharing many wonderful discussions and notebooks. This competition was so much fun. I'm ready for another LLM QA comp already!</p> <h1>RAG Ensemble</h1> <p>Early in the competition, I jumped into gold using a single DeBERTa-v3-large (trained on my 60k dataset <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">here</a>) and ensemble multiple RAG pipelines. Each RAG pipeline extracts context. Then we infer each <code>context + question + choices</code> with DeBERTa-v3-large. 
And finally ensemble the sets of logits and make predictions:</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Sep-2023/rag-ensemble2.png" alt=""></p> <h1>GPU Speed Up</h1> <p>I was surprised to discover that each new RAG pipeline I added above improved my CV and LB more than adding additional DeBERTa. Therefore I focused on accelerating the speed of RAG pipeline + DeBERTa inference (with the goal of ensemble in mind). I created lots of diverse RAG pipelines and I accelerated RAG with:</p> <ul> <li>Use <strong>GPU</strong> for Faiss</li> <li>Use <strong>GPU</strong> for TF-IDF with <strong>RAPIDS</strong></li> <li>Use <strong>GPU</strong> (and fp16 on 2xT4) for Sentence Transformers</li> </ul> <p>Regarding DeBERTa, with 2xT4 GPU I was able to infer DeBERTa <strong>4x faster</strong> than using 1xP100 GPU. This was accomplished using </p> <ul> <li><code>from torch.cuda.amp import autocast</code> (Note fp16 only speeds up T4 not P100 GPU) for 2x speed</li> <li><code>import threading</code> where we infer half the test samples on each T4 GPU for 2x speed</li> </ul> <h1>Example Notebook</h1> <p>Example notebook with GPU Speed Up is <a href="https://www.kaggle.com/code/cdeotte/rapids-tf-idf-lb-0-900" target="_blank">here</a>. Submit notebook single model takes only <strong>30 minutes</strong> and achieves <strong>LB 0.900</strong>!<br> <strong>UPDATE:</strong> After competition ended, I learned about the TF-IDF parameter <code>sublinear_tf=True</code> (<a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/446558" target="_blank">here</a> and <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/446693" target="_blank">here</a>). This boost CV and LB by an amazing <code>+0.010</code> WOW! I updated notebook version 2 with this parameter.</p> <h1>Diverse RAGs</h1> <p>The goal of RAG is to search Wikipedia and find text context. My final ensemble used the following diverse RAGs:</p> <ul> <li>Find top 5 articles out of 6M Wikipedia using title, key words, first paragraph. Then extract top 20 sentences.</li> <li>Find top 5 sections out of 128k STEM Wikipedia. Then extract top 20 sentences</li> <li>Find top 5 chunks out of 12M Wikipedia 512 token chunks. Then extract 20 sentences</li> <li>Use Question plus Choices to search Wikipedia.</li> <li>Use Question without Choices to search Wikipedia</li> <li>Use Choices without Questions to search Wikipedia</li> <li>Find top 5 sections out of 271k Wikipedia STEM using <strong>RAPIDs TF-IDF</strong>. Then use paragraphs</li> <li>Find top 5 sections out of 271k Wikipedia STEM using Sentence Transformers. Then use paragraphs.</li> </ul> <h1>Memory Error</h1> <p>Unfortunately I could only ensemble 7 of my dozens of RAG pipelines before receiving OOM errors. This ensemble submit took only 6 hours to infer 7x Wikipedia RAG and 7x three DeBERTa. If I could solve the OOM errors then I could infer more RAG + DeBERTa pipelines in the remaining 3 hours and boost CV and LB. (In retrospect, perhaps I should have focused on improved RAG quality over RAG quantity).</p> <h1>Question Drop Out Augmentation</h1> <p>My best DeBERTa-v3-large was trained on my 60k dataset and used <strong>Question Drop Out Augmentation</strong>. During training, samples typically use <code>Question + 5 Choices</code> to locate <code>context</code>. Then the DeBERTa is trained with <code>context + question + 5 choices</code>. 
The idea of <strong>Question Drop Out Augmentation</strong> is that, randomly 50% of the time, we only use <code>5 Choices</code> to locate <code>context</code>. And the DeBERTa is trained with <code>context + 5 choices</code> without seeing the question.</p> <p>Using <strong>Question Drop Out Augmentation</strong> achieved my best <code>MAP@3</code> loss of <code>0.470</code> on the 200 train samples. This single model achieved <code>LB 0.904</code> in notebook <a href="https://www.kaggle.com/code/cdeotte/rapids-tf-idf-lb-0-900" target="_blank">here</a>.</p> <h1>Choice Permute TTA</h1> <p>Another interesting technique that I observed was <strong>Choice Permute TTA</strong>. When extracting context we can concatenate <code>question + choice A, B, C, D, E</code>. Using sentence transformers, this gets context. If we instead concatenate <code>question + choice E, B, A, C, D</code> where we randomly permute the choices, then the sentence transformer extracts different context. If we infer both contexts with DeBERTa and ensemble the two sets of logits, we observe a boost in CV and LB!</p> <h1>Another 1.6x Speed Up Trick!</h1> <p>We can achieve another 1.6x Speed Up for a total of <strong>7x speed up</strong> when comparing 2xT4 vs. 1xP100 using <strong>Drop 2 Wrong Choice</strong>. After we infer a few DeBERTa, we can use the result to determine the 2 wrong choices for each test question. Then we can infer only the 3 best choices for a <code>5/3 = 1.6x</code> speed up. (i.e. we only generate and infer tokens for the top 3 choices). We set the predictions for unused choices 4 and 5 as the same logit as choice 3. Then we have 5 logits and we can ensemble this with our previous inference.</p> <h1>My 60k, 40k, and 99k Datasets!</h1> <p>Thank you Kagglers for sharing so many wonderful train datasets. To train OpenBook QA DeBERTa-v3-large, I added context to every public dataset and then published the resultant data with a context column added in my three Kaggle datasets. Discussions <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383" target="_blank">here</a>, <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/440908" target="_blank">here</a>, and <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/444202" target="_blank">here</a>. I trained many DeBERTa-v3-large models using all sorts of combinations of these three datasets in search of the best train procedure and data! However the success of my final solution was a result of having many diverse RAG pipelines and not because of having many (or a great) DeBERTa-v3-large.</p> <p>Nice. Congratulations on your great solo finish. Well done!</p> <p>Thanks for the code. I will experiment with <code>trainer</code> tomorrow and compare speed. It looks like you configured trainer to use fp16 and I believe trainer uses 2xGPU. I'm curious if trainer splits half the test samples on one GPU and half on the second GPU (which is fastest), or whether it attempts to use 2x GPU to infer each test sample.</p> <p>Thanks CPMP!</p> <p>Great question. I trained over 100 DeBERTa-v3-large models!</p> <ul> <li>In the beginning of the comp, I trained using only <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k</a>. 
Later I use more…</li> <li>Train DeBERTa using only RACE (from my 99k dataset <a href="https://www.kaggle.com/datasets/cdeotte/99k-data-with-context-v2" target="_blank">here</a>)</li> <li>Pretrain RACE (<a href="https://www.kaggle.com/datasets/cdeotte/99k-data-with-context-v2" target="_blank">99k</a>) + MMLU (<a href="https://www.kaggle.com/datasets/cdeotte/40k-data-with-context-v2" target="_blank">40k</a>). Then finetune on my <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k</a>.</li> <li>Train using <a href="https://www.kaggle.com/datasets/cdeotte/40k-data-with-context-v2" target="_blank">40k</a> + <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k</a> + <a href="https://www.kaggle.com/datasets/cdeotte/99k-data-with-context-v2" target="_blank">99k</a>.</li> <li>Pick various subsets of <a href="https://www.kaggle.com/datasets/cdeotte/40k-data-with-context-v2" target="_blank">40k</a> + <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k</a> + <a href="https://www.kaggle.com/datasets/cdeotte/99k-data-with-context-v2" target="_blank">99k</a>. Then train DeBERTa</li> <li>Lots of tricks to deal with different number of choices 3 or 4 or 5.</li> <li>Multi-class and Multi-label DeBERTa, representing choices differently.</li> <li>etc etc</li> </ul> <p>The reason I did not use more of my DeBERTa's is that I ran out of submissions to include them. I was on vacation for 2 weeks in the final 3 weeks of comp so I had limited time. In the last week about 20 out of my 35 submissions resulted in OOM error. Here were my two goals for the last week but I did not complete goal 1 to start goal 2. Step 1 uses my 60k DeBERTa's that I trained in the beginning of the competition.</p> <ul> <li>First I wanted to submit all my different RAG pipelines in ensemble to find best combination</li> <li>Second I wanted to add and/or replace more of my DeBERTa-v3-large to the ensemble to find best.</li> </ul> <p>My three Kaggle datasets included all public datasets shared by all Kagglers (with added context). The most helpful datasets are most likely the 12 datasets contained in my 60k dataset's CSV file. (Also to further complicate things, in the last week of comp, an additional 70k dataset was shared <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/445438" target="_blank">here</a>).</p> <p>I'm curious how important the finetuned DeBERTa-v3-large is. And how much difference can be achieved improving it. My intuition is that better RAG is more important than better inference model. For example, if we had perfect RAG which displayed the correct answer each time, then even the simplest inference model (using zero shot) should be able to pick out the correct A B C D E that matches the perfect context.</p> <p>The slowest thing during inference is inferring DeBERTa-v3-large. One DeBERTa using fp32 and 1xGPU on 4000 test samples could take 80 minutes. So it is most important to speed this up. 
My sample notebook <a href="https://www.kaggle.com/code/cdeotte/rapids-tf-idf-lb-0-900" target="_blank">here</a> speeds up DeBERTa by 4x and infers 200 samples in 1 minute, which is 4000 samples in 20 minutes.</p> <p>Regarding RAG, <code>faiss</code> can be moved to 1 or 2 GPU but I found using 1 GPU was faster with the following code:</p> <pre><code>sentence_index = read_index(index_path)  # index_path: path to the saved faiss index
res = faiss.StandardGpuResources()
sentence_index = faiss.index_cpu_to_gpu(res, 0, sentence_index)  # move the index to GPU 0
</code></pre> <p>Using all GPUs is <code>sentence_index = faiss.index_cpu_to_all_gpus(sentence_index)</code></p> <p>RAPIDS TF-IDF only uses 1 GPU, and using 1 GPU is much faster than CPU TF-IDF.</p> <p>Sentence Transformers are just models. So they can use fp16 and 2xT4 the same way as the code for DeBERTa above. However, if you use a light, small sentence transformer, I found just using 1xGPU and fp16 was nearly the same speed as 2xGPU (because the runtime is short anyway). For the larger sentence transformers, it would probably help to use 2xGPU.</p>
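<p>To make the 2xT4 speed up concrete, a minimal sketch of the fp16 + threading pattern described above could look like the following, with one half of the test set inferred per GPU. The names <code>model_0</code>, <code>model_1</code>, <code>loader_0</code> and <code>loader_1</code> are placeholder assumptions, not the notebook's actual variables:</p> <pre><code>import threading
import torch
from torch.cuda.amp import autocast

def infer_half(model, loader, device, out):
    # run one half of the test samples on one T4 with fp16 autocast
    model = model.to(device).eval()
    chunks = []
    with torch.no_grad(), autocast():
        for batch in loader:
            batch = {k: v.to(device) for k, v in batch.items()}
            chunks.append(model(**batch).logits.float().cpu())
    out[device] = torch.cat(chunks)

results = {}
threads = [
    threading.Thread(target=infer_half, args=(model_0, loader_0, "cuda:0", results)),
    threading.Thread(target=infer_half, args=(model_1, loader_1, "cuda:1", results)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
logits = torch.cat([results["cuda:0"], results["cuda:1"]])
</code></pre> <p>The two threads simply split the data, so each GPU works on its own half independently, which is why inference parallelism is so much easier than training parallelism.</p>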
Kaggle - LLM Science Exam
[76th Place solution] - STEM WIKI Data filtering & Lesson learned points
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>In Our Notebook, it went up +0.006!<br> 0.010 is a pretty impactful number! 🤣</p> <p>Reducing the impact by writing log to the TF term is a little trick that I often used long before word2vec came along. </p> <p>I didn't expect TF-IDF to see the light of day in this competition, but I still think it's important to have a good mix of sparse and dense retrieval in the search problem. </p> <p>Thanks to the data shared by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> , I think my team has made some good progress, so thank you very much!!</p> <h2>🔥 Entire Architecture 🔥</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133178%2Fc6c3e5315813adc5b80ebe9d45723318%2FKakaoTalk_20231013_011613425.png?generation=1697128039286975&amp;alt=media" alt=""></p> <h2>Architecture in detail</h2> <p>Divided into parts 1 and 2. similar to <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">strategy.</a></p> <h3>Part 1</h3> <ol> <li>The <a href="https://www.kaggle.com/datasets/jjinho/wikipedia-20230701" target="_blank">6.2M wiki data</a> had data from various fields. And in the 200EA test.csv, the Science &amp; Mathematics and Computers &amp; Internet fields were more than 97% when we applied a <a href="https://huggingface.co/jonaskoenig/topic_classification_04" target="_blank">model to classify topics</a>. <br> Therefore, we reduced the number of wiki data to reduce the size of the index. (6.2M → 4.7M) → This method reduces the number of things to retrieve and construct new contexts. The intent is to utilize documents from the scientific domain as contexts whenever possible.</li> <li>In method 1, the index was constructed using the bge-small-1.5-en embedding model &amp; the faiss IndexFlatIP method.</li> <li>The models (weights) used in the ensemble are as follows : </li> </ol> <ul> <li>Model 1(0.32) : Modify the context in <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60K Data</a>. On 60K data, make prompt as query, apply wiki data + BGE Large 1.5 model embedding model, configure the search results as new context and train with DeBERTa large.</li> <li>Model 2(0.28) : <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60K Data</a> &amp; DeBERTa</li> <li>Model 3(0.28) : <a href="https://www.kaggle.com/datasets/cdeotte/99k-data-with-context-v2" target="_blank">Only RACE Data 97K</a> &amp; DeBERTa</li> <li>Model 4 (0.32) : <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a> 's <a href="https://www.kaggle.com/code/mgoksu/0-807-sharing-my-trained-with-context-model" target="_blank">Model</a></li> </ul> <h3>Part 2</h3> <ol> <li>We used 70K parsed data, 270k cohere data.</li> <li>When performing retrieval, we utilized tf-idf and applied bi-grams, and stopwords. 
We also utilized sublinear_tf = True, which plays an important role in reducing the weight of large TF values, and it increased performance.</li> <li>Our solution is similar to <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">MB's</a>. We didn't use a Longformer, but instead an ensemble of the same models we used in Part 1.</li> <li>We combined the Part 1 and Part 2 results using a probability_.max() &gt; 0.45 threshold. A higher prob value means we trust the TF-IDF search result. </li> </ol> <h2>😇 Lesson Learned Points 😇</h2> <ol> <li>We needed more validation data to make the model more performant and robust. I was too obsessed with the 200-sample data. </li> <li>Consider different metrics like MAP@1 as well as MAP@3 for the 200 samples. </li> <li>In this competition, the embedding models on the <a href="https://huggingface.co/spaces/mteb/leaderboard" target="_blank">MTEB leaderboard</a> performed very similarly in our experience, and we spent a lot of time obsessing over them.</li> <li>I have participated in only a few kaggle competitions and this was the first time I saw the shake-up phenomenon. I realized that the lower the percentage of the test dataset used for the public LB, the more dangerous it is. Our team was obsessed with the public LB, which was not a good thing 😭</li> <li>I never thought of the chunk, sentence-window, or sliding-sentence methods for the wiki data, even though they are really basic. But after running the contest for so long, I didn't have any fresh ideas and kept sticking to the way everyone else was doing it.</li> <li>With 1 week left in the competition, and over 9 hours of OOM and inference time per submission, the stress was considerable.</li> <li>I was surprised to see that I could get good enough results without using a 7B+ model. But I realized that to get a gold medal, I needed a more novel method than the ones published on kaggle. Gold medals are too hard 🫥<br> (I envision myself winning a gold medal in 2024!!)</li> </ol> <h2>Closing</h2> <p>Thanks also to all the kagglers who shared their ideas and code during the competition. <br> Also, sharing people's solutions after the competition is over seems like a great benefit of kaggle. </p> <p>And thanks to our team members <a href="https://www.kaggle.com/yumyeom" target="_blank">@yumyeom</a> <a href="https://www.kaggle.com/jisukim8873" target="_blank">@jisukim8873</a> <a href="https://www.kaggle.com/limhaa" target="_blank">@limhaa</a> <a href="https://www.kaggle.com/shengshingyuan" target="_blank">@shengshingyuan</a> for their hard work.</p>
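<p>For reference, the sparse retrieval settings described above (bi-grams, stopwords, sublinear_tf) look roughly like this with scikit-learn; <code>corpus</code> and <code>queries</code> are placeholder names for the wiki passages and the prompt + answer-option strings, and batching would be needed at the real corpus size:</p> <pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

vectorizer = TfidfVectorizer(
    ngram_range=(1, 2),      # uni-grams and bi-grams
    stop_words="english",    # drop stopwords
    sublinear_tf=True,       # use 1 + log(tf), damping large term counts
)
doc_matrix = vectorizer.fit_transform(corpus)      # corpus: list of wiki passages
query_matrix = vectorizer.transform(queries)       # queries: prompt + answer options

scores = linear_kernel(query_matrix, doc_matrix)   # dot products of tf-idf vectors
top5 = scores.argsort(axis=1)[:, -5:][:, ::-1]     # indices of the 5 best passages per query
</code></pre> <p>The <code>sublinear_tf=True</code> switch is exactly the log-of-TF trick mentioned at the top of this post.</p>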
Kaggle - LLM Science Exam
[8th Public / 18th Private] - Full Wikipedia Passage-level Retrieval
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Chris for the warm words! My whole DeBERTa training pipeline is actually copied from your shared notebook 😊 And my only 1 correctly trained model (where validation loss did not diverge early) was thanks to your comments about smaller LR in <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/440197" target="_blank">this thread</a> (I wish I had noticed that earlier and not on the last day)! Thanks for sharing your amazing skills and knowledge with everyone!</p> <p>PS: I actually did not want to go solo in this competition 😂 It's just… I was in 190th place when the team merge deadline happened, so I was too shy to ask people to team up and so I went solo 😅</p> <p>Thanks! I think I kind of dug my own grave by compressing the index 😅 It's fast and nice, but a 20 minute (or more, I don't know) speedup is not worth the few points that I missed since the LB is so tight! Somehow the thought of using a full uncompressed index never crossed my mind though 😅 (did not know you can upload 80GBs of data to Kaggle)</p> <p>Thanks! I also thought about batched searching, but the thought of uploading 100+ GBs of embeddings to kaggle… does not excite me 😅 In my local testing on my own data, the IVF,PQ index has quite a good recall (0.89 at top100), so all I need is just to re-rank them. Of course FlatIP is always better, I guess my logic was that "the model will contain some knowledge from Wiki anyways so that should compensate". Maybe I'm wrong :)</p> <p>I would love to see your Lucene BM25 index code! I wanted to make a full BM25 index for Wikipedia, but couldn't find time to do it (and I'm not sure what the best way to do it is).</p> <p>Congratulations to the top teams, especially to <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>, <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a>, and <a href="https://www.kaggle.com/ybabakhin" target="_blank">@ybabakhin</a>! This was a fun competition and I learned a lot about training language models and effective inferencing for LLMs on small GPUs. Building the whole retrieval pipeline from scratch was a really fun exercise.</p> <h1>A few thoughts about this competition</h1> <p>This competition was a rollercoaster for me: got to top20 just 1 week after joining, then the STEM 270K dataset changed the LB completely (7 days before competition end I was in ~190th place on the LB), got busy with work and abandoned the competition for weeks, and finally bouncing back to top20 in the last few days was both stressful and exciting. Not to mention that my Azure account got banned 11 hours before the end of the competition (forgot to increase budget limits), just when I was uploading data to Kaggle, so my latest models did not make it to my final submissions 😅</p> <p>In the end, the shakeup was super disappointing for me - after such an intense week without much sleep, I hoped for a better result. 
I'm glad I learned a lot by doing though 😁 My full inference code can be found here: <a href="https://www.kaggle.com/code/chankhavu/fork-of-0-92-submission" target="_blank">https://www.kaggle.com/code/chankhavu/fork-of-0-92-submission</a></p> <h1>My solution</h1> <p>My pipeline is quite standard - first, I retrieve relevant sentences from Wikipedia to add to the context for my final DeBERTa models. However, instead of retrieving articles and then sentences as in most public notebooks, I perform <strong>passage-level retrieval from full-text Wikipedia!</strong> This decision comes from the observation that, in the competition's 200 train set, there are many examples where the actual answer to the question can be found actually in a different, less related article. The re-ranking and prediction that comes after that are quite standard.</p> <p>The code of my final submission is here: <a href="https://www.kaggle.com/code/chankhavu/fork-of-0-92-submission" target="_blank">https://www.kaggle.com/code/chankhavu/fork-of-0-92-submission</a></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2270821%2F6ae277ca9e188913392575378a8ae114%2FPresentation1.png?generation=1696987202333885&amp;alt=media" alt=""></p> <h2>1. Full Passage-level Retrieval - 54.5 million vectors/passages</h2> <p>I split whole Wikipedia to passages of 128 tokens each (which is approx. <strong>100 words per passage</strong>), with a stride of 96 tokens (so, for example, an article after removing citations/categories/bibliography and after tokenization will be split into passages with offsets <code>{ [0, 127], [96, 224], ... }</code>. This results in a HUGE embeddings dataset of <strong>54.4 million vectors</strong>, each has dimension 768, and takes <strong>154 GBs</strong> on the disk. Obviously, it's too large for Kaggle. We need to compress this index to be usable on Kaggle.</p> <p>I've tried different combinations of FAISS indexes and quantization techniques - IVF, HNSW, Product Quantization (PQ) with OPQ preprocessing, Scalar Quantization (SQ), etc. I validate the indexes by generating questions with GPT-3.5, saving the Wikipedia article and passage that was used for generating the question, and measuring the recall of the index (if the retrieved passage is contained inside the passage used). The best and fastest combination is <strong>IVF with 256 centroids</strong>, and each vector is encoded into <strong>96 bytes with Product Quantization</strong>. Obviously, the more bytes in PQ the better, and 96 is the maximum possible PQ for GPU inference. The index is compressed to less than <strong>6 GBs</strong>, takes only ~20 seconds to perform a full search for 4000 questions (with <code>K=100</code> and <code>nprobe=96</code>)</p> <p>With <strong>only</strong> this index, I was able to achieve <strong>0.891 LB</strong> using just <strong>one public model</strong> (from Chris notebook), and this is before the STEM 270K dataset was published!</p> <h2>2. Science Articles Sub-Index - 7.6 million vectors/passages</h2> <p>I split the STEM 270K articles to passages of 128 tokens each and with stride 64 tokens. Because we have less vectors than the Full Wikipedia Passage-level Index, we can afford to use more precise indexes and get rid of the PQ quantization (that, in my opinion, is too aggressive). I ended up using <strong>IVF256</strong> and use a simple <strong>FP16</strong> quantization for this, allowing a more precise retrieval. 
This boosted my <strong>0.891</strong> LB solution to <strong>0.895</strong> - the boost is much smaller than for other teams, which I think is an indicator that my full passage-level retrieval is quite powerful by itself.</p> <p><strong>12 hours before the end of the competition</strong>, I also added 300K more articles from Wikipedia (I did not scrape using the API, just added from the Huggingface dataset) that are related to Science. I got those 300K articles by asking GPT-4 to generate top-level topics in Science (Physics, Chemistry, Biology, Engineering, etc.), and did some clustering with <code>bge-base</code> embeddings. In total, the Science sub-index has <strong>600K articles</strong> and <strong>7.6 million vectors</strong> (passages). This added a slight boost to both my public and private scores as well.</p> <h2>3. Re-ranking sentences</h2> <p>This part is quite straightforward. I split the passages into chunks of 3 sentences with a stride of 1 sentence, and re-rank those chunks using <code>BAAI/bge-large-en-v1.5</code>. Additionally, I also calculate the BM25 retrieval score for each of the sentences (using all retrieved sentences for all questions as the corpus).</p> <h2>4. Training DeBERTa models</h2> <p>I trained 6 DeBERTa models, each with a slightly different set of training data and context length. All those models were trained in the last 3 days of the competition using 1xA100 and 1x3090, and are likely under-trained because I did not have much time. A few interesting observations:</p> <ul> <li>Training data was generated using GPT-3.5, using different prompts to add diversity. I also used the 60k dataset by Chris Deotte. I also prompted the model to try to include more numbers, formulas, etc. to make the question harder (from a tokenizing standpoint).</li> <li><strong>Training on worse context gives better models!</strong> So I generated my training data with a weaker version of my retrieval pipeline (using <code>bge-small</code>). The difference is ~0.005 on LB.</li> <li>This competition is about reading comprehension and unleashing the knowledge inside the LM. So, smaller <code>lr</code> and more Q/A examples work.</li> </ul> <h1>What did not work</h1> <ul> <li>I spent some time trying to train a re-ranking model with LightGBM/XGBoost using LambdaRank objectives and various embedding scores/bm25 scores/tfidf scores as features. This is worse than a simple embedding score unfortunately.</li> </ul> <p>Thanks a lot for the code! I like how different all the retrieval pipelines in this competition look, and all are equally creative!</p> <p>I don't think that the <code>bge-large-en-v1.5</code> reranker is slow (my whole re-ranking pipeline, including text splitting and so on, only takes ~30 minutes for 4k questions and 300 text chunks per question). </p>
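<p>As a rough sketch of the compressed index described above (IVF with 256 centroids, 96-byte PQ codes, <code>nprobe=96</code>, top-100 search), the FAISS calls would look roughly like this; the file names and embedding arrays are placeholders, not the actual pipeline:</p> <pre><code>import faiss
import numpy as np

d = 768                                    # passage embedding dimension
xb = np.load("passage_embeddings.npy")     # placeholder: float32 passage embeddings
xq = np.load("question_embeddings.npy")    # placeholder: float32 query embeddings

# IVF with 256 coarse centroids + 96-byte Product Quantization codes
index = faiss.index_factory(d, "IVF256,PQ96", faiss.METRIC_INNER_PRODUCT)
index.train(xb)          # train centroids and PQ codebooks (a subsample is enough)
index.add(xb)
faiss.write_index(index, "wiki_passages_ivf256_pq96.faiss")

index.nprobe = 96        # visit 96 of the 256 inverted lists per query
scores, ids = index.search(xq, 100)   # top-100 candidate passages per question
</code></pre> <p>PQ96 encodes each 768-d vector into 96 bytes, which is how 154 GB of raw embeddings can shrink to a few GB at the cost of some recall.</p>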
Kaggle - LLM Science Exam
🏅Top 6% - 149 Rank solution (Ensembling techniques + Training/inferencing)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Kaggle Competition Write-Up</strong><br> <em>Acknowledgments:</em><br> I wish to extend my heartfelt gratitude to Kaggle for hosting this remarkable competition, which has been a tremendous learning experience. The Kaggle community, with its collective wisdom and camaraderie, played a pivotal role in shaping my journey. I've learned invaluable lessons from fellow competitors, and their willingness to share knowledge has been an immense source of inspiration.<br> <strong>Datasets Used:</strong><br> For this competition, I harnessed the power of two vital datasets:</p> <ol> <li><strong>60k Dataset by <a href="https://www.kaggle.com/cdeotte" target="_blank">Chris Deotte</a>:</strong> This dataset laid a strong foundation for my approach. It provided me with a wealth of data to train and fine-tune my models.</li> <li><strong>40k Dataset by Chris Deotte:</strong> This dataset further complemented my training efforts, enabling me to capture diverse patterns in the data.<br> <strong>Retrieval Process:</strong><br> A key element of my strategy was the ingenious TF-IDF technique provided by <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">MB</a>. This technique proved to be a game-changer, working wonders in enhancing the quality of my retrieval process.<br> <strong>Inference:</strong><br> To bolster my inference and retrieval capabilities, I utilized the extensive 270k MB dataset. This vast resource significantly contributed to the accuracy and reliability of my predictions.<br> <strong>Models:</strong><br> My model portfolio consisted of a variety of powerful components:</li> <li><strong>DistilBERT (DiBERTa) Models:</strong><ul> <li>I trained two DistilBERT models, one on the 60k dataset and the other on the 40k dataset.</li> <li>The 60k dataset-trained DiBERTa achieved a remarkable public leaderboard score of 0.901. I harnessed the openbook technique for this <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">model</a>, and I want to acknowledge the invaluable reference provided by Chris Deotte's notebook.</li> <li>The DiBERTa model trained on the 40k dataset exhibited a competitive score of 0.84.</li></ul></li> <li><strong>Other Models:</strong><ul> <li>I also experimented with other models to explore their potential:</li> <li>Longformer, though untrained, exhibited a respectable score of 0.83.</li> <li>The <a href="https://www.kaggle.com/datasets/itsuki9180/llm-kaggle-awp" target="_blank">ITK</a> model, provided for the competition, achieved a solid score of 0.82.(using just to introduce some different aspect to my ensembling model)</li> <li>A pre-trained DiBERTa model showed promise with a score of 0.88.<br> <strong>Ensemble Model:</strong><br> In pursuit of a robust and competitive solution, I employed an ensemble approach. The final ensemble model was meticulously crafted as follows:</li></ul></li> </ol> <ul> <li>I executed an averaging operation between the two DiBERTa models. 
While I did not anticipate a substantial change, this step helped improve the overall stability of my model.</li> <li>Subsequently, I integrated a voting mechanism provided by <a href="https://www.kaggle.com/code/radek1/an-introduction-to-voting-ensemble" target="_blank">radek</a>. This algorithm made its decision based on the consensus of all models, and it would only replace the primary answer, which was derived from the DiBERTa averaging, if all other models unanimously agreed on an alternative answer.<br> <strong>Final Score:</strong><br> After all these efforts, I was lucky enough to get a public leaderboard score of 0.904 and a private leaderboard score of 0.902. <br> Special thanks to:<br> <a href="https://www.kaggle.com/mbanaei" target="_blank">Mb</a> , <a href="https://www.kaggle.com/cdeotte" target="_blank">Chris Deotte</a>, <a href="https://www.kaggle.com/radek1" target="_blank">Radek</a> and many more </li> </ul>
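<p>A small sketch of the ensembling logic described above - average the two main models, then let the remaining models override the answer only when they unanimously agree on a different one. The array names are placeholders, not the notebook's variables:</p> <pre><code>import numpy as np

# probs_a, probs_b: (n_questions, 5) probabilities from the two main models (placeholders)
# other_preds: list of (n_questions,) top-1 predictions from the remaining models
avg_probs = (probs_a + probs_b) / 2.0
final = avg_probs.argmax(axis=1)

for i in range(len(final)):
    votes = {p[i] for p in other_preds}
    # override the averaged answer only if every other model agrees on one alternative
    if len(votes) == 1:
        alt = votes.pop()
        if alt != final[i]:
            final[i] = alt
</code></pre> <p>Only the top-1 swap is shown here for brevity; the remaining two slots of the MAP@3 prediction can still be filled from the averaged probabilities.</p>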
Kore 2022
10th place solution (Team Breakfast)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kore 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is a summary of the 10th place solution by Team Breakfast, and of our journey to get there. Our agent is purely rule-based.</p> <h1>Preface</h1> <p><a href="https://www.kaggle.com/vladikrajnak" target="_blank">@vladikrajnak</a> and I started this competition to get into reinforcement learning. Since the action and observation spaces are huge, our initial goal was to have an RL-based macro strategy for long-term planning, while using heuristic rules for micro-management. We started with <a href="https://www.kaggle.com/egrehbbt" target="_blank">@egrehbbt</a>'s solution and tried to improve on it. Our rule-based agent started to perform quite well, even being in the top 5 for a while. We were having a lot of fun discussing and implementing our own strategy ideas, so we ditched the original RL plan, and focused on a pure rule-based agent.</p> <p>On a side note we also considered using tree search, but the Kaggle simulator was way too slow for that, and we had neither the interest nor the skills to rewrite a much faster one.</p> <p>Like many others, we are particularly grateful to:</p> <ul> <li><a href="https://www.kaggle.com/egrehbbt" target="_blank">@egrehbbt</a> for sharing their 1st place solution for the beta competition. Not only was it a great starting point in terms of strategy, but as rather inexperienced programmers we also learned a lot about clean and efficient coding from their agent.</li> <li><a href="https://www.kaggle.com/jmerle" target="_blank">@jmerle</a> for providing an amazing visualization tool. It helped us tremendously with debugging and analyzing games, and we would not have been able to go that far without it.</li> </ul> <p>Our top agent is one that we submitted 2 weeks before the end of the competition. We implemented many changes in the last 2 weeks: delayed first expansion with an aggressive early attack, safer routes (to prevent fleet attacks), safer launches (to prevent shipyard attacks after launching many ships away), setting the miners' destinations so as to dispatch them where needed (either a shipyard with more kore mining potential or a weak shipyard). Yet none of them had a significant effect: to our surprise all our latest agents ended up with similar scores (1550-1600), despite behaving quite differently. </p> <p>Compared to the top players' agents, we think we lacked coordinated attacks, and shipyard abandonment when about to be taken over. These were presumably key features for reaching a higher "layer" in the leaderboard. 
We actually had those on our to-do list, but we did not have time to think them through carefully, and chose to prioritize other features in the limited time we had left.</p> <h1>Building our agent</h1> <p>Our first focus was to make all of <a href="https://www.kaggle.com/egrehbbt" target="_blank">@egrehbbt</a>'s estimates (kore-equivalent value of a mining route, number of ships for a successful attack of a shipyard, etc.) more accurate.</p> <p>We used board.next() to predict the future board states (updated after every launch, provided we had enough computing time left in the current turn), and we implemented an accurate (though very time consuming) route evaluation algorithm. This slowed down the code a lot, but improved our mining, which we thought was essential (especially to be able to dominate very early in the game).</p> <p>On the other hand, our estimates of the number of ships needed for a successful shipyard capture, or the max number of ships we can send mining without endangering our shipyard, did not work as well as we hoped. Probably because they were too conservative (based on a worst-case scenario) (as also pointed out by <a href="https://www.kaggle.com/competitions/kore-2022/discussion/340157" target="_blank">qihuaz</a>).</p> <p>We kept the sequential structure of <a href="https://www.kaggle.com/egrehbbt" target="_blank">@egrehbbt</a>'s agent, but reorganized it a bit, and rewrote most of the functions to make them more efficient, accurate, and/or to allow some anticipation.</p> <h3>1. Defend shipyards</h3> <p>Similar to the original: spawn if under attack, and if needed launch help from neighbor shipyards. For launching help, we tried various strategies: </p> <ul> <li>launch all available at the last moment (original)</li> <li>launch all available right now</li> <li>launch only if the shipyard under attack cannot self-protect by spawning alone in the next turns</li> <li>spawn/gather incoming miners until there are enough ships to successfully defend, then launch the number needed</li> </ul> <p>While short-sighted, the second method turned out to perform best. Maybe because it allows moving ships around and mining on the way as side effects.</p> <p>Improvement needed: flee if we cannot defend, instead of spawning. </p> <h3>2. Capture shipyards</h3> <p>In addition to improving the estimate of the number of ships needed, we also added anticipation: if an attack is possible within a small number of turns, we stop mining and spawn in anticipation of the future attack launch.</p> <p>Improvement needed: coordinated attacks.</p> <h3>3. Direct attacks / Adjacent attacks</h3> <p>The goal of a direct attack is to steal kore from an opponent's fleet, while adjacent attacks are suicide missions aimed at destroying ships through double damage. We always prioritized direct over adjacent attacks, and used relevant (different) scoring for each. For adjacent attacks, we also chose a route that minimizes the kore gathered along the way.</p> <p>One question here is how much risk to take. Attacking a fleet next to an enemy shipyard is certainly going to result in a counter-attack. On the other hand, launching fleets only if they are 100% safe results in essentially no attack being launched.</p> <p>However optimizing this balance does not seem crucial, as both our careful and our careless agents essentially scored the same. We would actually argue that fleet attacks are not essential to winning (unless maybe if the opponent is completely careless/very aggressive).</p> <h3>4. 
Expansion</h3> <p>We tried to find a good expansion criterion and new shipyard location, also trying to adapt to the opponent's behavior (e.g. attacking aggressively if they are first to expand, expanding more conservatively if we are first).<br> But in the end our best agent expands using very basic rules:</p> <ul> <li>expand when possible (enough ships in shipyard)</li> <li>choose the best location within a distance of 4-8 from the current shipyard based on the most kore nearby, forbidding locations which are closer to an opponent's shipyard than to ours.</li> </ul> <p>We also added some anticipation there (if it's time to expand but there are not enough ships in the shipyard, stop launching miners and spawn instead, until we can launch).</p> <h3>5. Spawn/Mining</h3> <p>We kept <a href="https://www.kaggle.com/egrehbbt" target="_blank">@egrehbbt</a>'s criteria for choosing between max spawning and mining, and also chose the best route based on expected kore per step.</p> <p>We thought having a more accurate expected kore would give us an advantage, but looking at other published agents it does not seem as important as we thought. For the number of ships to launch, we launch min_fleet_size if the route is considered safe (crude heuristic, though not the original one), and all available ships otherwise.</p> <p>We added the option to prevent a launch that would endanger a shipyard (a young shipyard containing zero or few ships could be easily captured); it is unclear whether it is better with or without.</p> <p>We spent a lot of time in the last 2 weeks building a more extensive list of routes to choose from, making the evaluation routine faster, optimizing the choice of the fleet size, choosing destination shipyards in a smarter way… but it did not make the slightest difference in the scores.</p> <h1>Conclusion</h1> <p>We did not learn about RL as planned, but we learned a lot about debugging and coding in Python. In particular the 3-second time constraint forced us to identify bottlenecks and optimize calculations (we stuck to Python though), and to make strategic choices about where to allocate that time.</p> <p>A side challenge we had to deal with is that unlike most of the top players, we were actually 2 people working on the same code at the same time. Since we could not get rapid feedback from the leaderboard on whether a new feature was actually good, redundant lines of code and incompatible flags started to accumulate, and versioning/merging became a nightmare. We kept it under control by communicating every day (at breakfast time!) about who was going to do what, but this really became a hindrance in the last weeks.</p> <p>On the plus side it was really enjoyable to have a partner in this adventure, obsessing over it as much as I did! It was our very first programming competition, none of us has a CS background or dev experience, so having <a href="https://www.kaggle.com/egrehbbt" target="_blank">@egrehbbt</a>'s well-structured code to start with is what allowed us to participate. We devoted all our free time to it in the last 2 months so we are very pleased with the result (and very pleased it's over, too…)</p> <p>We look forward to the next competition!</p>
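<p>To illustrate the board.next() rollout and per-step route scoring mentioned above, a bare-bones version of the idea could look like the sketch below; the route representation and the collection rate are simplified assumptions, not our actual evaluation code:</p> <pre><code>from kaggle_environments.envs.kore_fleets.helpers import Board

def future_boards(obs, config, horizon=20):
    # roll the current board forward with no new actions, to know how much kore
    # each cell will hold (and where fleets will be) at every future step
    board = Board(obs, config)
    boards = [board]
    for _ in range(horizon):
        board = board.next()
        boards.append(board)
    return boards

def route_score(route_points, boards, collection_rate):
    # expected kore of a route: at step t the fleet sits on route_points[t-1]
    # and collects a fraction of that cell's predicted kore at that step
    total = 0.0
    for step, point in enumerate(route_points, start=1):
        cell = boards[min(step, len(boards) - 1)].cells[point]
        total += cell.kore * collection_rate
    return total / max(len(route_points), 1)   # kore per step, used for route ranking
</code></pre> <p>The sketch ignores the fleet's own depletion of the cells it visits, which is one of the corrections that made the real evaluation so time consuming.</p>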
Kore 2022
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kore 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you!</p> <p>Hi Aurore of team Breakfast! I recall you managed to hit my first constructed shipyard with an aggressive strategy every now and then. Did you write a specific pattern for that? Is that where your name comes from, breaking the enemy fast?</p> <p>I did not use board.next(). I implemented my own system for keeping track of things. One difficulty I would see with the board.next() method is taking into account actions added in the current step, especially when you have multiple shipyards. How did you account for newly launched fleets?</p> <p>Thanks Ravi!</p> <p>Thanks! You as well with your prize finish.</p> <p>Thanks Marek!</p> <h1>The competition</h1> <p>Thanks <a href="https://www.kaggle.com/bovard" target="_blank">@bovard</a> and all fellow competitors for the great fun during this interesting competition. I joined this competition to get into reinforcement learning, but while playing around with the great Kore Beta 1st place solution by <a href="https://www.kaggle.com/egrehbbt" target="_blank">@egrehbbt</a> I managed to get up in the ranks quickly, mainly by changing the expansion strategy. I decided to focus on the rule-based approach and see how far it would get me. </p> <p>My approach was a bit like how one would develop a software product. I had a list of issues, kept all code under version control and did iterations solving a few issues and bugs at a time. For this the excellent visualization tool by <a href="https://www.kaggle.com/jmerle" target="_blank">@jmerle</a> was vital. It provided me with all the overviews I needed to identify why I lost a certain episode.</p> <p>When I joined the competition it was dominated by <a href="https://www.kaggle.com/shuntarotanaka" target="_blank">@shuntarotanaka</a> and <a href="https://www.kaggle.com/qihuaz" target="_blank">@qihuaz</a>. At some point I managed to get to 3rd spot after which I dropped again to 6th or 7th. By then 1Musketeer <a href="https://www.kaggle.com/itswin" target="_blank">@itswin</a> jumped towards #1, a position which he managed to keep for weeks mid-competition. His approach was very powerful and hard to beat. Thanks <a href="https://www.kaggle.com/itswin" target="_blank">@itswin</a> for sharing your agent with us.</p> <p>At some point I managed to get to the #1 position with my V92 agent. This agent kept that position for a few weeks. I actually had a very hard time beating it myself, even though I made many changes that I thought would be beneficial for sure. It took me two weeks and 26 updates to get agent V118, which was consistently better in 1v1 matches. That agent turned out to also do well against other competitors and the lead that I managed to acquire was astonishing, about 200 points or so compared to #2. </p> <p>Then on the second day after submission end I looked into the gaze of the basilisk <a href="https://www.kaggle.com/basilisk1337" target="_blank">@basilisk1337</a>. I saw a match that caught me off-guard against one of his last-day submissions. It turned out he found a way to paralyze my first shipyard, making me unable to expand. 
The fix would have been straightforward, but I guess it was the beauty of the basilisk gaze that I could not even make my agent look away. After that match basilisk quickly collected points from all my agents, as they shared the weakness, and he quickly rose to #1. Then it seemed a game of chance: one day he was #1, the next day I was #1. The convergence of the ranking seems to be a bit unstable. I have some thoughts, but can share those in a different post.</p> <p>Then on the last day of match execution my V127abandonrescuecollideattack (version 127/146) managed to get the lead in this competition. It was actually the last version where I made changes in the rules themselves. All subsequent versions were variations in the value of the parameters to tune the rules. Getting to the winning agent was a lot of coding and a lot of trial and error, a combination of introducing new rules and tuning of parameters.</p> <p>That said, I learned a lot during this competition:<br> • I ran into timing problems, requiring me to optimize route search and apply more caching<br> • I ran into memory problems, requiring me to dive into cache management<br> • At some point I got an account for GitHub Copilot, which improved my coding productivity considerably (particularly for the parts of code that one hates to write, such as print statements)<br> • Improved my debugging and general Python coding skills</p> <p>This competition was great fun, thanks to all of my fellow competitors. Perhaps our agents will meet again in a future simulation competition!</p> <h1>The winning agent</h1> <p>My agent is based on the Kore Beta 1st place solution by <a href="https://www.kaggle.com/egrehbbt" target="_blank">@egrehbbt</a> and ported by <a href="https://www.kaggle.com/realneuralnetwork" target="_blank">@realneuralnetwork</a>. While one will probably recognize a lot, most of the sections were adapted. According to Kaggle I added 5923 lines and removed 777.</p> <p>The agent conceptualizes the following action phases, as inherited from egrehbbt's solution:<br> • Shipyard defense<br> • Fleet attacks: adjacent and direct attacks<br> • Shipyard attacks<br> • Expansion<br> • Mining<br> • Spawning</p> <p>I largely kept this framework in place, but added two large concepts: <br> • Board state: <br> For each point on the board, for each point in time the agent keeps a ledger of the actual damage and the damage potential of both myself and my opponent.<br> • Routing:<br> Given an objective the agent calculates the score of (almost all) potential routes that could be chosen</p> <p>Applied to the agent's phases, along with some additional tweaks:<br> • Shipyard defense: I calculate whether or not I am able to defend an attack. If so, I spawn and send reinforcements from other shipyards. If I cannot defend, I added an abandon pattern, where my fleet leaves the shipyard undefended and tries to: attack another shipyard, rescue incoming fleets, attack smaller attacking fleets, jump to another shipyard, or just leave the shipyard and attack it after its loss a few turns later.<br> • Fleet attacks: I split up adjacent attacks and direct attacks due to the difference in survival. An adjacent attack is intended to die, so it had a different scoring criterion. For direct attacks it would be more or less a mining route with an attack included. 
I spent a lot of time managing both the execution and prevention of direct attacks, which helped me to create a good representation of the board state.<br> • Shipyard attacks: rather than collecting all ships at one shipyard to create a marauding fleet, I decided to use a distributed approach. I calculate whether I can combine the fleets from different shipyards to beat an enemy shipyard. At first I had a hard time beating musketeer, who had an excellent adjacent attack pattern, but then I managed to route the fleets to minimize adjacent attack risk.<br> • Expansion: this is where I got the quickest gains compared to the Beta 1st solution. My spot picking was a balance between kore, crowding and damage positions. I tried several parameters here, usually tweaking at least one with every agent submitted.<br> • Mining: this has been core in my strategy, though it turned out to be tricky. I chose to maximize mining efficiency per ship per tick. In fact, the complex routes that I saw <a href="https://www.kaggle.com/shuntarotanaka" target="_blank">@shuntarotanaka</a> use prompted me to introduce the routing approach that could try almost all possible routes. This helps me in maximizing kore early game. What I do have a bit of a problem with is that my mining is a bit like a swarm of locusts, mining kore so fast that the kore spot is depleted and more or less useless afterwards. I saw others preserving their nearby kore spots in a better way. So I needed to match this strategy with aggressive play to leverage the fast mining of kore.<br> • Spawning: the agent usually spawns if it doesn’t mine or if it has plenty of kore</p> <p>Even though my agent is rule-based, it is a collection of patterns which turned out to behave in ways I did not anticipate at first. My shipyard attack patterns, though designed for late game, were for example quite effective in taking over overly risky first new enemy shipyards.</p> <p>All in all, I had great fun!</p> <p>Thanks all for reading!</p> <p>Thx! Very sharp eye! Am I correct that you applied this trick only in your final submissions? When did you first notice this piece of code? </p> <p>Thanks devastator!</p> <p>Thanks. Nice to hear the routes served as inspiration. I used a tree search algorithm to try the routes and tuned complexity based on the amount of time I still had on the bench. Also, I used a lot of caching. <br> The shipyard destinations were set by a redistribution pattern which decided which shipyards should be destinations. This set of destinations is an input for the routing algorithm.</p> <p>Thanks Vladi, congrats to you as well on the gold medal.</p> <p>Thanks Juan!</p> <p>Yes correct, I believe they will start charging $10 per month soon. I registered during the beta period while it was still free. </p> <p>Thanks Siya!</p> <p>Sorry, it seems I overlooked this one.</p> <p>I used a max route length parameter to decide on the depth of the tree. That was context dependent. Flight plan was a factor in pruning the tree. </p> <p>Part of the caching was in-between turns, e.g. max flight plan length based on fleet size.</p> <p>Redistribution was based on risk of being attacked. </p>
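<p>To make the "board state" idea a bit more concrete, here is a much-simplified sketch of a damage-potential ledger (the maximum number of ships a player could bring to each point at each future step). It only considers ships currently sitting in shipyards and ignores spawning and in-flight fleets, so it is an illustration of the concept rather than the winning agent's code:</p> <pre><code>import numpy as np

SIZE = 21       # the Kore board is 21x21 and wraps around
HORIZON = 30    # number of future steps tracked

def damage_potential(shipyards, horizon=HORIZON):
    # shipyards: list of (x, y, ship_count) tuples for one player (illustrative format)
    # ledger[t, x, y] = max ships that player could have at (x, y) after t steps
    ledger = np.zeros((horizon + 1, SIZE, SIZE), dtype=np.int32)
    for x, y, ships in shipyards:
        for t in range(horizon + 1):
            for dx in range(-t, t + 1):
                reach = t - abs(dx)                  # remaining Manhattan budget
                for dy in range(-reach, reach + 1):
                    px, py = (x + dx) % SIZE, (y + dy) % SIZE
                    ledger[t, px, py] = max(ledger[t, px, py], ships)
    return ledger
</code></pre> <p>Comparing the two players' ledgers at a target point and arrival step is then one way to judge whether an attack or a defense can succeed, which is how such a ledger feeds the routing and attack decisions described above.</p>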
Kore 2022
20th place solution: rule-based economic model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kore 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you! Here are a few notes on valuation:</p> <ul> <li><p>Expected income is determined by mining rate (based on average fleet size) and average kore per square (including squares with zero kore), with estimates of depletion and kore growth over time.</p></li> <li><p>Ship values are the individual ship's contribution to expected income, discounted by 1% for each time step</p></li> <li><p>Ship net present value (NPV) is just the ship value minus initial cost (10).</p></li> <li><p>For base values, I determine how many incremental ships I would be able to build with this shipyard over time, and sum up the NPVs of the additional ships. If I exhaust my kore, have sufficient capacity, etc. then extra bases add little value.</p></li> <li><p>I also reduced base production capacity somewhat to account from time lost due to fleet launches. Having fewer fleets per base reduces this "lost time."</p></li> <li><p>Note that this treats bases purely as production sources. They also have value in terms of reducing round-trip times, decreasing response times, and exerting control over the board. I didn't factor any of these in.</p></li> <li><p>For new base NPV, it's just the base value minus the value (not the cost!) of the 50 ships. This can be negative if the ships are more valuable than the base.</p></li> <li><p>For base captures, I look at the value of the base both to me and my opponent. The total value is the sum of the two.</p></li> </ul> <p>I tried various other valuation approaches, including a LightGBM regression model to predict end game outcomes, but these were less stable and caused weird behavior in my fleets, so I reverted back to this economic modeling approach in the end.</p> <p>I picked 1% as a discount rate just by eyeballing the total asset growth experienced in some of the competitors' games. I planned to do some hyperparameter testing for this but never got around to it. I don't believe the specific discount rate used had a major impact on my agent's behavior.</p> <p>I treated code efficiency as a big deal throughout the competition, since 3 seconds is a very short time. My most time critical pieces of code were written in Cython, and I tried my best to keep the remaining Python code reasonably vectorized and efficient, using NumPy arrays rather than kore helper objects whenever feasible. Without these optimizations my solution would need to be vastly different (and probably not as good).</p> <p>I also implemented a throttling mechanic to prevent timeouts. Once I started eating into my time bank my agent would start progressively limiting its lookahead window and scaling back the number of trial paths it computed. I always wanted my agent living right on the edge, using all of the compute time available to it. Most of my timeouts happened when I was in the lead, so it made sense to opt for speed over strategy at that point. After some experimentation I was able to prevent timeouts entirely.</p> <p>Sure, code is now available at <a href="https://github.com/pinky1812/kore" target="_blank">https://github.com/pinky1812/kore</a></p> <p>Congratulations to all competitors. 
Here's a summary of my solution, which is entirely rules-based. I attempted to treat this as an economic modeling problem combined with path optimization (described at <a href="https://www.kaggle.com/competitions/kore-2022/discussion/336804" target="_blank">https://www.kaggle.com/competitions/kore-2022/discussion/336804</a>)</p> <p><strong>Momentum</strong></p> <ul> <li>Build a look-ahead "momentum" model of the board for the next 30 turns</li> <li>Build balance-of-power matrices which show maximum force projection for the players for each square over the 30 turn horizon</li> </ul> <p><strong>Valuation</strong></p> <ul> <li>Estimate the net present value (NPV) of a new ship, assuming no combat, using a 1% discount rate</li> <li>Estimate the NPV of a new base, assuming the new ship NPV, existing production capacity, estimated income, and kore</li> <li>Zero out the new base NPV if I already have more shipyards than the enemy but also fewer ships, to avoid over-extension </li> </ul> <p><strong>Priority Spawning</strong></p> <ul> <li>Create a priority queue for new ship construction</li> <li>Any shipyard which is under active capture threat gets critical priority for new production as well as kore set aside for production in future turns, if needed</li> <li>Neighboring shipyards which could reach the threatened yard get high priority for new production</li> <li>Assign normal priority to all other shipyards</li> <li>Create SPAWN actions for high and critical priority shipyards only. If we run out of kore, add those yards back to the launch queue.</li> <li>If there are any critical or high priority spawns going on, disable new base building</li> </ul> <p><strong>Launch vs. Normal Priority Spawning</strong></p> <ul> <li>In the early game, wait for enemy to make the first launch so we might be able to opportunistically get a bigger payoff from a foray into the mid-board</li> <li>If I have less than 21 ships at a shipyard and kore to spend, prioritize spawning over launching</li> <li>Otherwise, launch!</li> </ul> <p><strong>Risk Adjustment</strong></p> <ul> <li>If I have a numerical advantage, adjust the balance-of-power matrices in my favor so I begin taking more risks (i.e. penetrating deeper into enemy territory than I would normally deem safe).</li> </ul> <p><strong>Strategic Labelling of Home Bases</strong></p> <ul> <li>Estimate the economic value of each base (for me and the competitor).</li> <li>If the base is actively threatened, assign this value to the square for each time step prior to its capture</li> <li>Also assign the same value to destroying the inbound fleet</li> <li>If my current fleet is large enough to prevent the capture, ignore negative balance-of-power when defending home bases</li> </ul> <p><strong>Strategic Labelling of Enemy Bases</strong></p> <ul> <li>Estimate the economic value of each base (for me and the competitor).</li> <li>Assign the value to this square but use a 1% time discount to encourage my fleets to capture it sooner</li> </ul> <p><strong>Balance of Power Strategy Labelling</strong></p> <ul> <li>Note: this portion of the algorithm implemented in Cython for speed</li> <li>For each shipyard, including enemy shipyards, look at each arrival time step and its implications on the balance-of-power matrix for neighboring shipyards.</li> <li>If a home shipyard's balance-of-power is always positive, ignore it. 
It should be safe.</li> <li>If not, see if our arrival at a given destination in a given time step can shift the balance-of-power in our favor</li> <li>If it can shift it entirely positve, assign 50% of the yard's value to the destination</li> <li>If it can shift only some time steps positive, assign at most 5% of the base's value to the destination depending on the proportion of time steps which have flipped positive.</li> <li>Do the same for the enemy bases, however only assign the maximum value of the base which I am threatening since I can't simultaneously capture multiple bases with the same fleet.</li> </ul> <p><strong>Create Trial Paths</strong></p> <ul> <li>Create a list of possible fleet sizes: full size, half full, 21, 13, 8, 5, 3, and 2. If I already have a very large number of ships at the shipyard, skip some of the smaller sizes.</li> <li>Create a "zombie" fleet size of "full size". Zombie fleets try to avoid mining kore and hone in on enemy bases or fleets as quickly as possible.</li> <li>Calculate minimum efficient time to be "ships at shipyard" divided by "fleet size"</li> <li>Calculate minimum bottleneck time as the turn at which our kore reserves will run out if we produce every turn. I.e. try to avoid arriving back home in a turn when the base would like to be producing new ships.</li> <li>Run optimal path finding algorithm on each possible fleet size, constrained by the minimum efficient time and minimum bottleneck time</li> <li>If our new base NPV is positive but we don't have enough ships, create a maximum time constraint so that fleets will coalesce into larger sizes so we can build the base.</li> <li>There is also effectively a maximum time constraint applied in the strategy labelling step, depending on which shipyard destination the fleet is heading to.</li> <li>Select the best path among the trial options, optimizing for best kore-per-time-per-ship among the paths with maximal strategic value, constrained by the maximum and minimums described above. </li> </ul> <p><strong>Zombie conversion</strong></p> <ul> <li>If my best path is an attempted base capture but there's some possibility it will fail, re-run the algorithm using the same fleet size as a zombie and use this path instead.</li> </ul> <p><strong>Zombie march</strong></p> <ul> <li>Look for vulnerable enemy bases that my fleet ignored but which I could capture given enough air support from other fleets. </li> <li>Do another trial path using a zombie fleet targeted at this base only. If this gives me a higher value, use this path instead.</li> </ul> <p><strong>Base building</strong></p> <ul> <li>If my new base NPV is positive and greater than my best flight plan value, create a new base instead of my original plan</li> <li>Look for suitable base locations, ideally 6 squares away near lots of kore (by absolute value) and kore "seeds" of any size (i.e. future mining potential). Give higher weight to kore located on the same X or Y axis. Try to build dominance in the best areas (e.g. center, sides, corners) before expanding elsewhere.</li> <li>Use the path optimization algorithm to define the path to the new base location</li> </ul> <p><strong>Normal priority spawning</strong></p> <ul> <li>For any shipyard not launching fleets, create a SPAWN action if we still have kore left</li> </ul>
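<p>A toy version of the ship and base valuation described above might look like the sketch below; <code>income_per_ship_per_turn</code> and <code>n_incremental_ships</code> stand in for the estimates derived from mining rate, average kore per square and production capacity:</p> <pre><code>def ship_value(income_per_ship_per_turn, horizon=400, discount=0.01):
    # discounted stream of a single ship's expected mining income (1% per step)
    return sum(income_per_ship_per_turn / (1.0 + discount) ** t for t in range(1, horizon + 1))

def ship_npv(income_per_ship_per_turn, ship_cost=10.0):
    # net present value: what the ship is expected to earn minus what it costs to build
    return ship_value(income_per_ship_per_turn) - ship_cost

def new_base_npv(n_incremental_ships, income_per_ship_per_turn, convert_ships=50):
    # value of the extra ships the base lets us build over time,
    # minus the value (not the cost!) of the 50 ships spent converting
    gain = n_incremental_ships * ship_npv(income_per_ship_per_turn)
    spent = convert_ships * ship_value(income_per_ship_per_turn)
    return gain - spent
</code></pre> <p>With this framing a new base can easily come out with a negative NPV when kore or production capacity is the bottleneck, which is exactly the "extra bases add little value" behaviour described above.</p>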
Kore 2022
2nd place solution - Rule based
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kore 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Introduction</h2> <p>My solution, like many others, is based on the baseline solution provided by <a href="https://www.kaggle.com/egrehbbt" target="_blank">@egrehbbt</a>. I'll try to describe the changes I've made to it and hopefully, I won't miss anything important. If you have any questions, feel free to ask them.</p> <h2>Planning</h2> <p>First important change I made was that I added planning to all the Actions. If I wanted to create a new shipyard, but didn't have enough ships, I would check whether I would have enough in the next 5 moves. If the answer was yes, I would either spawn ships or do nothing. Same was true for fleet and shipyard attacks. For mining, planning was a bit different. I would compare my current best route's score(expected kore divided by route's length) to the future ones. If the future ones were sufficiently better, I would wait(i.e. spawn or do nothing).</p> <h2>Main shipyard</h2> <p>It turns out that you often don't need a lot of ships to mine effectively. This is because increase in % mined is logarithmic with regards to the fleet's size. In fact, many competitors noticed the opposite problem - mining too well and quickly depleting kore near their shipyards. So, what to do with all those ships that are not needed for mining? You can either create new shipyards or use them for offense. I decided to do the latter(which wasn't always the right choice). I chose one shipyard to be my main shipyard, based on distance to the closest enemy. If I had more shipyards that were equally close, the one with most ships(taking into account incoming fleets) would be the main one. Then I would send ships to that shipyard until it had a certain % of my total number of ships. Percentage ranged from 55 to 85, depending on how close the closest enemy shipyard was. I also had some other rules for distributing ships between shipyards, but those aren't as important so I won't describe them here. Having 70%+ of my ships at a single shipyard often allowed me to take over enemy's shipyards with ease, specially if they were close to me.</p> <h2>Converting leads into victories</h2> <h3>big_attack</h3> <p>Sometimes I would lose games even though I had a large advantage(though this was more common when playing against my own agents). To combat this, I created big_attack - a macro level strategy, which involved multiple shipyards and would last 10-15 steps. If a condition was met, I would choose 1 shipyard to lead the attack. Other shipyards would spawn for as long as possible before sending ships to the chosen one. Then the chosen shipyard would attack. I considered doing coordinated attacks like others(i.e. attacking from multiple shipyards at the same time instead of aggregating ships at 1 shipyard first), but decided against it because I didn't think it would make a big difference.</p> <h3>crash_ships</h3> <p>Another way of dealing with this was crash_ships, which is similar to "whittle attacks" used by <a href="https://www.kaggle.com/itswin" target="_blank">@itswin</a> . 
If I had lots of ships at some shipyard(usually my main shipyard), I would attack an enemy's shipyard even though I couldn't capture it, with the intent of reducing that shipyard's ship count and causing other difficulties.</p> <p>Some of my agents had crash_ships or big_attack turned off(I always used at least 1 of them). Agents with only big_attack did slightly better than the agents with only crash_ships, and the best agent had both turned on.</p> <h2>Expansion</h2> <p>Expansion is one of the most important parts of Kore 2022. Even seemingly small changes in the expansion strategy would produce drastically different outcomes. There are 2 main questions here: <strong>where</strong> and <strong>when</strong>.</p> <h3>Where</h3> <p>Instead of just looking at the target point's nearby kore, I weighted the values based on distance. If a point was closer to the target point, its kore would be multiplied by a higher number. Other than that, I tried to expand not too close to the enemy and not too close to my other shipyards. I had 3 separate rules(for the first expansion, second, all the others), but the differences aren't significant enough to write about.</p> <h3>When</h3> <p>As time went on, I gravitated more and more towards faster, less safe expansions. It wasn't obvious whether this was good(specially because my agents with reckless expansions were doing worse initially), but I saw that agents in the top 5 weren't as aggressive with their early attacks, and so I thought I could get away with it. In the end, I was right. Even though it took them longer to converge, agents with unsafe expansions did noticeably better. My first expansion was the least safe, while for others I calculated safety based on the number of ships, available kore and distance to the closest enemy.</p> <p>Other notable changes I made to the original solution are:</p> <ul> <li><p>any shipyard could expand at any point(while in the original only the shipyard closest to the point could expand at it)</p></li> <li><p>I would never create a new shipyard if the number of my shipyards was greater than the enemy's by 2 or more(including ongoing expansions).</p></li> </ul> <h2>Defense</h2> <p>Improving defense was very important in order to ensure that I could survive at least some of the early attacks. 3 major changes I made to the original solution are:</p> <ul> <li><p>Mining with guard ships during the attack(the only difference between guard ships and regular ships was that guard ships had to be back in time to defend, so their routes were usually much shorter)</p></li> <li><p>Sending defense even when it's not on time(up to 8 steps late). Sometimes I wasn't able to defend a shipyard, but that didn't mean I should give up on it. This sometimes allowed me to recapture a lost shipyard.</p></li> <li><p>Sending defense early. In the original solution, no matter how many ships you had (at other shipyards), shipyards would wait until the last moment, when they would send ships to the attacked shipyard. But sometimes they would have enough ships long before that, and it was ineffective to wait so long(attacks like crash_ships could exploit this weakness), so I would send defense as soon as I had enough ships.</p></li> </ul> <h2>Fleet attacks, counter-attacks and mining</h2> <p>Not much to say about this. 
Besides planning which I mentioned earlier, I did mostly the same as others(added more routes, checked safety, direct/adjacent attacks, attacked fleets that were attacking my fleets etc.), but I think this is the weakest part of my solution and the one that could be improved the most. In many games, my fleets were getting attacked a lot and I would lose large amounts of kore because of it. Making mining safer gave me worse results both locally and against others. This means that my way of measuring safety wasn't good enough and I needed to take more details into account.</p> <h2>Preventing first expansion</h2> <p>In the original solution's expansion code, there's a rule:</p> <pre><code> if incoming_hostile_fleets: continue </code></pre> <p>This is generally a good idea, but it can be exploited - by sending very small attacks often. I<br> added this to my final submissions and to my surprise, it worked. Well, sort of. It increased my rating by ~150 points(my best rated agent without this attack is 1755), but it didn't change my position on the leaderboard.</p> <h2>Closing thoughts</h2> <p>While I enjoyed the competition and the psychological aspects of it, I think it's unfortunate that so many solutions were rule based. I'm looking forward to the future competitions and hope to see more RL there.</p> <p>Thanks. It's possible, but I think it's unlikely. Only 3 of my agents had that attack and their later matches were almost all against <a href="https://www.kaggle.com/harmbuisman" target="_blank">@harmbuisman</a> (In the last 10 days, one of my top agents played only 4 games against a different opponent).</p>
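<p>As a rough illustration of the distance-weighted kore scoring used for choosing expansion spots in the "Where" section above, here is a small Python sketch. The 1 / (1 + distance) weight and the dummy kore map are my own assumptions; the write-up only states that closer kore is multiplied by a higher number.</p> <pre><code># Illustrative sketch: score a candidate expansion point by nearby kore,
# weighting closer tiles more heavily. The 1 / (1 + d) weight is an assumption.
SIZE = 21  # the Kore board is 21x21 and wraps around

def torus_distance(a, b, size=SIZE):
    dx = abs(a[0] - b[0])
    dy = abs(a[1] - b[1])
    return min(dx, size - dx) + min(dy, size - dy)

def expansion_score(candidate, kore_map):
    score = 0.0
    for x in range(SIZE):
        for y in range(SIZE):
            d = torus_distance(candidate, (x, y))
            score += kore_map[x][y] / (1.0 + d)   # closer kore counts for more
    return score

kore_map = [[1.0] * SIZE for _ in range(SIZE)]    # dummy board for the example
print(round(expansion_score((10, 10), kore_map), 2))
</code></pre>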
Kore 2022
3rd place solution - attack the opponent's second shipyard
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kore 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly, I'd like to thank <a href="https://www.kaggle.com/bovard" target="_blank">@bovard</a> and all participants. During the competition, I had great fun iterating on improvements to my agent.</p> <h1>Summary</h1> <p>Like many other participants, I based my agent on the solution provided by <a href="https://www.kaggle.com/egrehbbt" target="_blank">@egrehbbt</a> and improved operations such as mining, shipyard building, shipyard defense, and so on. </p> <p>Overall, many features can be computed analytically, so there seemed to be few places that would benefit from statistical methods such as reinforcement learning. It took about three weeks to set up reinforcement learning, but the benefit did not seem very large, so I decided to change direction after about a week of training.</p> <p>I think many of my operations are almost the same as those of other agents in the gold and silver medal range. On the other hand, my shipyard-attack operation is relatively distinctive, and it contributed to pushing my agent into the prize range, so I'll concentrate on explaining this operation and only briefly describe the others.</p> <h1>Attack the Second Shipyard</h1> <p>The following is the agent that obtained the best score in the final evaluation. </p> <p><a href="https://www.kaggle.com/competitions/kore-2022/leaderboard?dialog=episodes-submission-27412843" target="_blank">https://www.kaggle.com/competitions/kore-2022/leaderboard?dialog=episodes-submission-27412843</a></p> <p>As you can see, in most matches that the agent wins, it succeeded in attacking the opponent's second shipyard. This tendency holds even when the agent plays against other prize winners. On the other hand, the agent rarely loses matches due to a failed attack on the second shipyard. This suggests that the judgment of whether an attack would succeed was very accurate and effective even in the high rating range. For this accurate judgment, some preparations were necessary. </p> <h2>Simulator</h2> <p>At the beginning of the competition, I implemented a simulator. It estimates the board state for the next 25 turns assuming that neither player issues any instructions. One characteristic of this competition is that the future state of the fleets can be estimated with reasonable accuracy, since the flight plans of departed fleets are already determined. Therefore, the estimated result was very valuable. </p> <h2>Estimation</h2> <p>At every turn, the agent estimates how many ships are needed to attack each opponent shipyard and how many ships are available at each allied shipyard. </p> <h3>Estimate how many opponent ships can be used to defend</h3> <p>From the simulation result, the agent can estimate the maximum number of ships that can be in the target shipyard and its surrounding shipyards at each turn. Therefore, the agent estimates how many ships the opponent can use for defense depending on the turn on which the attack starts. </p> <h3>Estimate how many opponent ships can be generated</h3> <p>In addition to the existing ships, ships spawned before the attack completes can be used for defense. 
Therefore, the agent calculates how many ships for defense can be spawned, based on the turn of the attack, the estimated kore at each turn, and each shipyard's ship-generation capacity at each turn.</p> <h3>Estimate how many allied ships can be used for attack</h3> <p>From the simulation result, the agent can estimate the number of ships available for the attack at each turn. If the agent can make use of a collision with an allied fleet, as explained below, it also counts the ships of the collided allied fleet.</p> <h2>Collision with an allied fleet</h2> <p>To increase the number of ships for the attack, the agent searches for an allied fleet along the pathway to the targeted opponent shipyard. This search is relatively easy given the simulation result, because the agent knows where, and on which turn, allied fleets will be. If there is a fleet along the pathway, the agent selects a flight plan that collides with the allied fleet.</p> <h2>Judgment of attack</h2> <p>Based on the estimated number of ships needed for the attack and the number of ships available, the agent decides whether or not to launch the attacking fleet (see the small sketch at the end of this post). In addition, if there is a chance that more than the required number of ships will become available within 8 turns, the agent waits for that turn rather than launching a new fleet immediately.</p> <h2>Keep the number of ships at the shipyard</h2> <p>With the algorithms explained above, the agent can select the attack operation precisely whenever it comes across a situation where an attack is possible. To increase the chance of such situations, mining plans were chosen so that, after at least seven turns, each shipyard would hold as many ships as the opponent's nearest shipyard. This strategy degrades mining efficiency somewhat, but increasing the chance of an attack contributed to the win rate.</p> <h1>Other Operations</h1> <p>Apart from attacking shipyards, the following operations were used:</p> <ul> <li>mining</li> <li>defense</li> <li>move</li> </ul> <p>If a shipyard is at risk of being taken if attacked, the agent moves some ships to it from a shipyard that has a sufficient number of ships.</p> <ul> <li>convert</li> <li>attack fleet</li> <li>escape</li> </ul> <p>If it is deemed likely that a shipyard will be taken, that shipyard launches a fleet to a safe shipyard.</p> <p>=============</p> <p>Again, I enjoyed the competition very much and was able to get my best placement! </p> <p>If you have any questions about my agent, please feel free to ask in the comments. Thanks!</p>
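<p>Below is the small sketch referred to in the "Judgment of attack" section: a Python illustration of comparing attackers against estimated defenders. The 10-kore spawn cost matches the game rules, but the per-turn inputs are assumed to come from the 25-turn simulator and the function names are mine, not the author's code.</p> <pre><code># Illustrative sketch: can an attack on an opponent shipyard succeed?
SPAWN_COST = 10  # kore per ship (game rule)

def estimated_defenders(present_ships_by_turn, opponent_kore_by_turn,
                        spawn_capacity_by_turn, turns_to_arrival):
    # Ships that can already be at (or reach) the target before we arrive ...
    existing = max(present_ships_by_turn[:turns_to_arrival + 1])
    # ... plus ships the opponent can still spawn before the attack lands.
    spawnable = 0
    for t in range(turns_to_arrival):
        affordable = int(opponent_kore_by_turn[t] // SPAWN_COST)
        spawnable += min(spawn_capacity_by_turn[t], affordable)
    return existing + spawnable

def attack_margin(attacking_ships, *defender_args):
    # Positive margin: launch now; otherwise wait (up to 8 turns) or skip.
    return attacking_ships - estimated_defenders(*defender_args)

# Example with made-up simulator outputs over a 5-turn attack horizon
present = [30, 32, 35, 35, 40, 40]
kore = [55, 60, 40, 80, 90]
capacity = [5, 5, 5, 5, 5]
print(attack_margin(80, present, kore, capacity, 5))
</code></pre>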
Kore 2022
4th place solution - rule based
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kore 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you!</p> <p>Thanks! I hope the write-up is helpful.</p> <p>Yeah, it was tricky to judge the value of the kore. Taking the face value is probably not the best choice, and anything else would be subjective.</p> <p>I actually also have something like NPV to penalize longer mining routes in the early game. The rationale is that in the early game a few more ships make a big difference, so I should prioritize shorter routes and spawn ships more quickly instead of waiting to collect a bit more.</p> <p>However, I found that even a modest discount rate (0.98 at first, annealed to 0 over 150 steps) made my agent weaker somehow. The rule seemed logical in theory but just didn't show its worth. So I still keep this rule, but set the discount rate at 0.995 (annealed to 0 over 150 steps), small enough that I don't see a noticeable disadvantage.</p> <p>Thank you and congratulations on the gold medal!</p> <p>Thanks for the confirmation. It seems like many of us employed similar strategies, just with different implementations.</p> <p>Thank you and congrats to you too!</p> <p>I didn't count, but my impression is that defending fleets with merging is utilized in almost every game if the game is long enough (300+ steps). </p> <p>I actually assumed that you had these features already since your agents are so powerful. I couldn't imagine how much stronger your agents would be if you implemented them…</p> <h1>The Journey</h1> <p>First of all, I'd like to thank the Kaggle team for hosting such an interesting competition. Given that most of the competitions with prizes on Kaggle are centred on supervised learning, e.g. computer vision, NLP, etc., Kore 2022 is really a fun one in the sense that competing is just like playing a game, and it is hard to get bored with it.</p> <p>Like many others, I started off with Reinforcement Learning but soon switched to rule-based agents after discovering the Beta 1st place solution by <a href="https://www.kaggle.com/egrehbbt" target="_blank">@egrehbbt</a>. I was quickly convinced that RL alone cannot win this competition. Even if someone managed to use RL to get to the top, it would probably need to be coupled with actions suggested by rule-based heuristics.</p> <p>At this point, it is probably not a surprise that my agent is based on the Kore Beta 1st place solution by <a href="https://www.kaggle.com/egrehbbt" target="_blank">@egrehbbt</a>, so huge thanks to <a href="https://www.kaggle.com/egrehbbt" target="_blank">@egrehbbt</a>. It laid a super solid foundation for my agent. Even though I changed/added a lot of logic and rewrote most of the functions, that framework was crucial, and I am just standing on the shoulders of a giant.</p> <p>Also a big thanks to <a href="https://www.kaggle.com/jmerle" target="_blank">@jmerle</a> for his great visualization tool. I probably used it a thousand times. 
It made debugging so much less painful.</p> <h1>The Agent</h1> <h2>Strategies</h2> <p>Before diving into the details, I would like to share some thoughts at the strategic levels:</p> <ul> <li>Defense is more effective than offence, and so is given higher priority</li> <li>Collecting kore that you cannot spend quick enough is futile. </li> <li>Reactive actions (rather than proactive actions) can give you competitive advantages.</li> </ul> <h2>Tasks</h2> <p>My agents try to finish the following tasks (almost) sequentially.</p> <ul> <li><p>defend_shipyards</p></li> <li><p>capture_shipyards</p></li> <li><p>expand</p></li> <li><p>defend_fleets</p></li> <li><p>suicide_attack</p></li> <li><p>roundtrip_attack</p></li> <li><p>greedy_spawn</p></li> <li><p>mine</p></li> <li><p>spawn</p></li> </ul> <h3>Defend Shipyards &amp; Capture Shipyards</h3> <p>Nothing fancy here. When a shipyard is attacked, I evaluate nearby shipyards to see if reinforcements from nearby shipyards are enough to defend the attacked shipyard. If so, mobilize them for defence.</p> <p>When I want to attack/capture an enemy shipyard, I assume all its nearby shipyards would help with defence to their maximum capacity and I would only attack if I can win in that case. I also launch fleets from multiple shipyards to capture a single shipyard. I am aware that this is prone to adjacent attacks, but didn't have the time to fix it like <a href="https://www.kaggle.com/harmbuisman" target="_blank">@harmbuisman</a> does in <a href="https://www.kaggle.com/competitions/kore-2022/discussion/340035#:~:text=but%20then%20I%20managed%20to%20route%20the%20fleets%20to%20minimize%20adjacent%20attack%20risk" target="_blank">his bot</a>.</p> <p>one thing I never quite figure out is my own future spawns. When kore is limited and shipyards compete for the resource for spawning, I may miscalculate how many ships a shipyard can spawn and mess up my own defence.</p> <h3>Expand</h3> <p>One basically needs to decide: 1) when and 2) where to expand.</p> <p><strong>when to expand</strong>: One should expand when kore collection is faster than the speed of consumption</p> <p><strong>when NOT to expand</strong>: If the ships/shipyards ratio would be significantly smaller than that of the opponent.</p> <p><strong>where</strong>: it is a balanced selection based on potential profit (nearby kores) and risks (nearby shipyards). </p> <ul> <li>Only consider the cells (potential new shipyard location) whose two closest shipyards are friendly shipyards (or one if I have only one shipyard)</li> </ul> <p>The candidate locations are then scored by the sum of nearby kores.</p> <ul> <li>'nearby' is defined to be within a maximum radius, which is in turn determined by the number of total shipyards (more shipyards, smaller radius). 
</li> <li>nearby kore is discounted if there is already a friendly shipyard near it.</li> </ul> <h3>Defend fleets</h3> <p>If a fleet is under attack, check if any of the following is possible:</p> <ul> <li>launch a new fleet to join the attacked fleet (merge with fewer ships) to over-power the attacker</li> <li>launch a new fleet to absorb (merge with more ships) the attacked fleet so the attacked fleet changes path to avoid the attack</li> <li>launch a fleet for a suicide mission to weaken the attacker, so the attacked allied fleet can win the final battle.</li> </ul> <h3>Suicide attack &amp; Roundtrip attack</h3> <p>suicide_attacks are fleet launches that do not expect the fleet to come back to base; roundtrip_attacks do.</p> <p><strong>suicide_attack</strong><br> suicide_attack includes the 'adjacent attacks' that many of you are familiar with, but it also includes some other situations. For example, send out a suicide_attack fleet to offset an incoming enemy "adjacent attacker".</p> <p>Routes are evaluated to choose the one that collects the least kore.</p> <p><strong>roundtrip_attack</strong></p> <p>The roundtrip_attacks are enhanced with a few features:</p> <ul> <li>If the available ships are not enough for an attack, <strong>actively check if a friendly fleet nearby can be absorbed to make the attack possible</strong>.</li> <li>If the available ships are not enough for an attack, check if spawning more for a few time steps would make an attack possible</li> <li>If the same fleet can be attacked at a later time, do it later (more kore looted, less time for the opponent to react).</li> <li><strong>the attack is implemented as a mining mission</strong> that has to pass a specific cell at a specific time (or two cells if it has to absorb a friendly fleet before attacking) so that maximum kore can be collected along an attack mission.</li> </ul> <h3>Mining</h3> <p>This part took most of my time, in terms of both coding time and in-match computation time. Maybe too much time, but it definitely gave me an advantage.</p> <p><strong>Irregular mining routes</strong></p> <p>Enhancing mining is one of the few things that I did at the very beginning, since the advantage is obvious. I started exploring irregular mining paths (paths that are more complicated than rectangular or L-shaped) early, and that allowed me to stay at the top of the LB for a while before other competitors caught up.</p> <p>I precompute a set of mining routes (about 80k+, far fewer than all possibilities) for a centred shipyard, load them, and adapt them to the actual shipyard locations at run time. The way I evaluate the expected collected kore can be found in this <a href="https://www.kaggle.com/code/solverworld/computing-speed-for-kore-harvesting-routes/comments" target="_blank">discussion</a>. <a href="https://www.kaggle.com/solverworld" target="_blank">@solverworld</a> did an amazing job <a href="https://www.kaggle.com/code/solverworld/computing-speed-for-kore-harvesting-routes/comments" target="_blank">profiling different approaches</a>.</p> <p>After some optimizations (caching, precomputing), I still only managed to evaluate about 30k routes (all shipyards combined) within the 3s per step limit, so I had to prune the candidate routes using various heuristics.</p> <p><strong>Risk control</strong></p> <p>The other thing I did to enhance mining was to avoid routes that are not safe from enemy attacks. 
My way to achieve this was to maintain a 3-dimensional "net power map", indicating, at each time step and for each cell, how much damage can/would be dealt (positive for friendly power, negative for enemy); a small code sketch of such a map appears at the end of this post.</p> <p>A slice of it looks like this:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1716193%2F67bdb68eefb95d3e1f58311a6e93e28f%2FWeChat%20Image_20220727152825.png?generation=1658950116633958&amp;alt=media" alt=""></p> <p>This map helps my fleets avoid most attacks, especially in the early game. However, it is not meant to be 100% safe. I found that if I tried to avoid all "attackable routes", the routes would be too conservative and give away too many good mining opportunities.</p> <p>I guess many top players have similar tools to manage these risks.</p> <p><strong>Prioritize mining kore in "hostile areas"</strong></p> <p>Soon after I started using irregular mining routes, I found that they are a double-edged sword. My mining is so efficient that I can easily deplete the kore near my shipyards, while there is plenty of kore in "hostile areas" where mining is risky. While the safe and efficient mining gave me advantages in the early game, if I could not finish the game early the depleted kore became a disadvantage. </p> <p>To solve this, I adjust the values of the expected kore mined. If the cell is located close to enemy shipyards, it is given a higher adjustment factor. The rationale is this: although it is riskier to mine in those areas, that is actually the kore that should be fought for. Unlike the kore near my shipyards, one more unit of kore mined by me in the risky area is one less unit of kore minable by the enemy.</p> <p>=========</p> <p>There are many, many more rules and details, and it would be too cumbersome to explain them all, so I just picked some of the important ones to share with you above. I am more than happy to discuss them in more detail if you have specific questions.</p> <p>Thanks for reading, and I look forward to seeing you guys in the next competition.</p>
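<p>Here is the small code sketch of the "net power map" promised above, written in Python for illustration. It only projects force from shipyards (positive for friendly, negative for enemy) based on distance, and ignores fleets, cargo and future spawns, which a real map would have to include.</p> <pre><code># Illustrative sketch of a 3-d net power map: power[t, x, y] is the friendly
# force minus the enemy force that could reach cell (x, y) within t steps.
# Only shipyards are considered here; fleets and future spawns are ignored.
import numpy as np

SIZE = 21

def torus_distance(a, b, size=SIZE):
    dx = abs(a[0] - b[0])
    dy = abs(a[1] - b[1])
    return min(dx, size - dx) + min(dy, size - dy)

def net_power_map(my_yards, enemy_yards, horizon=20):
    # my_yards / enemy_yards: lists of ((x, y), ship_count) pairs (assumed format)
    power = np.zeros((horizon, SIZE, SIZE))
    for sign, yards in ((1, my_yards), (-1, enemy_yards)):
        for pos, ships in yards:
            for x in range(SIZE):
                for y in range(SIZE):
                    d = torus_distance(pos, (x, y))
                    for t in range(d, horizon):   # reachable from step d onwards
                        power[t, x, y] += sign * ships
    return power

power = net_power_map([((5, 5), 40)], [((15, 15), 60)])
print(power[10, 10, 10])   # a contested cell at time step 10
</code></pre>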
Kore 2022
60th place solution : A* algorithm for pathfinding
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kore 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used A* pathfinding for all the main functions; maybe you will find my code useful.<br> All the code and explanations are in <a href="https://github.com/Kreyparion/Astar-Kore22-Competition" target="_blank">my GitHub repository</a>.</p> <p>Anyway, it was my first competition and I was very happy to make it this far. I had a lot of fun coming up with my own solution to this interesting problem!</p> <p>Thank you very much 😊</p>
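<p>For readers who have not used A* before, here is a generic Python sketch of A* pathfinding on the 21x21 wrapping Kore board with a torus Manhattan-distance heuristic. It is not taken from the repository linked above and uses plain step count as the cost; a real agent would fold kore and risk into the cost function.</p> <pre><code># Generic A* sketch on a wrapping 21x21 board (not the author's implementation).
import heapq

SIZE = 21

def wrap(p):
    return (p[0] % SIZE, p[1] % SIZE)

def heuristic(a, b):
    dx = abs(a[0] - b[0])
    dy = abs(a[1] - b[1])
    return min(dx, SIZE - dx) + min(dy, SIZE - dy)   # torus Manhattan distance

def a_star(start, goal):
    frontier = [(heuristic(start, goal), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = wrap((pos[0] + dx, pos[1] + dy))
            heapq.heappush(frontier,
                           (cost + 1 + heuristic(nxt, goal), cost + 1, nxt, path + [nxt]))
    return None

print(a_star((0, 0), (3, 18)))   # the shortest path wraps around the board edge
</code></pre>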
Kore 2022
Writing a fast simulator
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kore 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>During this competition I mainly focused on writing a fast simulator, which I intended to use in combination with search algorithms like MCTS. It was a long journey with several ups and downs, but I wanted to share some of my most interesting findings and caveats on how to build an efficient simulator for this challenge. This post will be a little longer than average, so sit back and relax as I walk you through my thought process.</p> <h1>Beginning</h1> <p>At the beginning, I didn't really know which approach I wanted to try (RL, search, rule-based, all of them). The first thing I noticed when testing kaggle's Kore environment was the speed, it was a bit .. underwhelming. I managed to get a few hundred simulations per second, which would make RL training alone take a long time, but looking at the rules, we were given 3s (+60s total) which made search algorithms like my absolute favorite: Monte Carlo Tree Search (MCTS), largely infeasible given the complexity of the environment. Since I still wanted to try it out, I quickly decided to make a faster simulator. </p> <p>Fortunately, the code of the competition environment is publicly available on: <a href="https://github.com/Kaggle/kaggle-environments/tree/master/kaggle_environments/envs/kore_fleets" target="_blank">https://github.com/Kaggle/kaggle-environments/tree/master/kaggle_environments/envs/kore_fleets</a>. The first thing to notice is that it is completely written in pure Python, so already reimplementing it using another language can give us a significant speedup. However, speeding up core functions using Cython, Numba, Numpy etc. can already be sufficient for your needs. Nevertheless, for search algorithms we typically want maximum performance, so I went for a reimplementation using Rust, a fast and memory-safe language for people who do not like segmentation faults. Using pyo3: <a href="https://github.com/PyO3/pyo3" target="_blank">https://github.com/PyO3/pyo3</a>, I’ve imported my Rust code into python, for usage in my agent script.</p> <p>My first implementation tried to stay as close as possible towards the official implementation, however I’ve made some changes to further improve performance. One of them was regarding flight plans.</p> <h1>Flight Plans</h1> <p>In the original implementation, a flight plan like: <em>"N10S3W2"</em> is stored as a Python string encoded with UTF-8. Using string operations, we can parse the next symbol or digit (which is a bit more complex than just getting the first character) and store the result. This is fine, but we can be even more efficient since flight plans consist of only 15 symbols: 4 directions "NSWE", 10 digits "0,…,9" and "C" the creation symbol. If we add a symbol "␣" for the absence of symbols, we have 16 possible symbols that can be effectively represented by 4 bits. Therefore, I introduced a bit representation of the flight plan where each symbol is represented by 4 bits. 
Here is my representation table:</p> <table> <thead> <tr> <th>Bits</th> <th>Symbol</th> </tr> </thead> <tbody> <tr> <td>0000</td> <td>␣</td> </tr> <tr> <td>0001-1001</td> <td>1-9</td> </tr> <tr> <td>1010</td> <td>0</td> </tr> <tr> <td>1011</td> <td>C</td> </tr> <tr> <td>1100</td> <td>N</td> </tr> <tr> <td>1101</td> <td>E</td> </tr> <tr> <td>1110</td> <td>S</td> </tr> <tr> <td>1111</td> <td>W</td> </tr> </tbody> </table> <p>Although the table looks arbitrary at first glance, it was carefully selected after several iterations. Thus, a flight plan up to 8 symbols can be represented by an unsigned 32-bit integer and a flight plan up to length 16 by a unsigned 64-bit integer.</p> <table> <thead> <tr> <th>1100</th> <th>0001</th> <th>1010</th> <th>1110</th> <th>0011</th> <th>1111</th> <th>0010</th> <th>0000</th> </tr> </thead> <tbody> <tr> <td>N</td> <td>1</td> <td>0</td> <td>S</td> <td>3</td> <td>W</td> <td>2</td> <td>␣</td> </tr> </tbody> </table> <p><em>Example of representing the flight plan as bit string: “N10S3W2”</em></p> <p>What is the advantage? Faster processing of flight plans (even without heap allocation). We can easily manipulate the plan with simple operations like shifts, and, or operations etc. Extracting the first symbol is just a shift operation, and checking if it is a direction is just checking for &gt;= 1100. If it is a digit instead (and not zero), we can convert it directly to the appropriate digit. After extracting, we can delete the first symbol with a move operation and get the rest of the plan. Do you want to know if two plans are equal or if one plan is longer than another plan? Just compare them with an integer comparison. <br> These are all small advantages, but since we often work with flight plans, it pays off.</p> <h1>Speed</h1> <p>So how fast is this Rust environment with the bit represented flight plans compared to the original Kaggle environment? I did some quick benchmarks by having two balanced agents play a few games against each other (n=100) and counting the average time it took to simulate a step. The number of games is somewhat small, and the balanced agents don't create overly complicated games, so the results should be taken with a grain of salt:</p> <table> <thead> <tr> <th>Environment</th> <th>Speed</th> </tr> </thead> <tbody> <tr> <td>Python kore_fleets</td> <td>~12.9 ms/step</td> </tr> <tr> <td>Rust</td> <td>~0.08 ms/step</td> </tr> </tbody> </table> <p>That’s quite the speedup! and would already allow us to simulate several thousands of board positions, given simple positions (only a handful of shipyards and fleets). Unfortunately, this is still not enough considering the complexity of the game. Can we go faster?</p> <h1>Events</h1> <p>Before I go into detail, let me give a quick example scenario:<br> Let's say we implement an agent, and at the beginning of each environment step we use our environment to simulate 20 time steps into the future. This is valuable information for our agent, for example, we might find out that one of our fleets will return to our home shipyard within 12 turns. I call this a <em>Shipyard-Fleet Collision (SFC) event</em>. After applying some clever rules, we decided on our next action and the next environmental step occurs. We repeat what we did last round and start simulating again 20 time steps into the future, which will take about the same time. However, there is a high probability that we will receive information that we already received. 
So the same fleet from the last round is still on its way home, and now takes only 11 turns to arrive. This event persisted and, theoretically, did not need to be recalculated.</p> <table> <thead> <tr> <th>Step</th> <th>Events</th> </tr> </thead> <tbody> <tr> <td>100</td> <td>SFC event { turn: 112, fleet-uid: "90-1", shipyard-uid: "0-1"}</td> </tr> <tr> <td>101</td> <td>SFC event { turn: 112, fleet-uid: "90-1", shipyard-uid: "0-1"}</td> </tr> </tbody> </table> <p>Shipyard-Fleet collisions are not the only events of interest; there are also some others:</p> <ul> <li>Allied Fleet-Fleet collision (AFF)</li> <li>Hostile Fleet-Fleet collision (HFF)</li> <li>Fleet-Fleet damage (FFD)</li> <li>Shipyard creation (SC)</li> </ul> <p>If we know that these events will occur in the future, there is a high probability that they will still occur at the next time step and the step after that and so on. Ideally, we would only need to simulate the game from the first step to the end and then we can use the event information over and over again. Unfortunately, this doesn't work because the actions we take at each step can change whether those events really happen or not. For example, our fleet arriving in 11 turns could be intercepted by an enemy fleet launched in the last turn. Therefore, the event will be canceled, which is unfortunate for us. But don't worry, even if it was canceled, it can still happen. Next turn we could send out our own fleet to intercept the enemy interceptor before it reaches our fleet, so our fleet will return safely. Even if an event currently does not look like it will happen in the future, the information about the event itself is valuable because it still could occur.</p> <table> <thead> <tr> <th>Step</th> <th>Events</th> </tr> </thead> <tbody> <tr> <td>100</td> <td>SFC event { turn: 112, fleet-uid: "90-1", shipyard-uid: "0-1", will_happen: True}</td> </tr> <tr> <td>100</td> <td>Enemy launches fleet to intercept</td> </tr> <tr> <td>101</td> <td>SFC event { turn: 112, fleet-uid: "90-1", shipyard-uid: "0-1", will_happen: False}, HFF event {…}</td> </tr> <tr> <td>101</td> <td>We launch a fleet to intercept the interceptor</td> </tr> <tr> <td>102</td> <td>SFC event { turn: 112, fleet-uid: "90-1", shipyard-uid: "0-1", will_happen: True}, HFF event {…}, HFF event {…}</td> </tr> </tbody> </table> <p>However, if we assume that our fleet is intercepted a few turns later before it reaches our shipyard (we did not send a fleet to help), then the SFC event is guaranteed to not occur. Events depend on fleets, so when those fleets cease to exist, the events also disappear.</p>
However, an event that is not currently on track to occur may still occur in the future.</li> <li>Events are dependent on one or more fleets (there are no Shipyard-Shipyard collisions; in each event at least one fleet is involved). If these fleets are destroyed, the events are also destroyed.<br> So, how can we use this event system to speed up the environment?</li> </ul> <h1>Precomputations</h1> <p>Let's assume we are in the first round of the game. At this point, no team has any fleets, so there are no events. In the next round, <em>we have already calculated all the future events from the last round</em>. We assume that this always applies for each round. Now we look at any new fleet that has been launched. If no fleet has been launched, there will be no new events (events rely on fleets), but if at least one fleet has been launched, things will get interesting and the following calculations will be made:</p> <ol> <li>First we precompute the fleet path, i.e. we parse the flight plan directly up to its last symbol and then discard it, saving us the need to do this later on. We get a list of future positions.<br> Note: We do not need to calculate these future positions from the start to the end of the game. We stop when the fleet hits a shipyard (SFC event). We can be sure that the fleet will not exist after that, because shipyards are not destructible and can only change their owner, so the SFC event is guaranteed to be the very last event of the fleet, if the fleet is not destroyed before then.</li> <li>Now we check this fleet path against all other fleet paths to see if there are any collision/damage events. Calculate all events, even those that will not currently occur in the future. Stop when we reach an SFC event. <br> All these events are stored locally within the fleet object. So if the fleet is deleted, all the events tied to the fleet will also be deleted. They are guaranteed not to occur.</li> </ol> <p>If we apply this procedure for each fleet, we get all new events. So we only need to calculate events for new fleets, which gives us a strong efficiency boost; all other events are taken from the last simulation step if they were not already executed.<br> If we want to simulate our new environment for one turn, we just need to:</p> <ul> <li>Execute all actions given by the agents</li> <li>Compute events for the new fleets only</li> <li>Iterate over all fleets and determine all events that might happen in the current turn.</li> <li>Check if these events will occur (e.g., for an HFF event, check if both fleets still exist)</li> <li>Execute the event</li> </ul> <p>There are even some small extra improvements I made for the events; however, I won’t go into details here.</p> <h1>Benefits</h1> <p>Normally we would have to calculate all possible collisions/damage at each time step (even if no collisions occur). So for fleet-fleet damage we would have to extract all fleet positions and check if two fleet positions are adjacent. On the other hand, the event system tells us directly if there is currently an FFD event, and we only need to verify it, saving us several calculations. Also, we no longer have to parse and store flight plans, and there is another advantage.</p> <h1>Pseudo Rollouts</h1> <p>MCTS consists of several phases, one of which is the simulation phase. In this step, we evaluate a position by playing a so-called <em>rollout</em>. Starting from the current position, we play a game between two (possibly random) agents until the end of the game or until some other criterion is met. 
Then the position is evaluated (whether it is currently winning or losing for one side) and the result is backpropagated. However, simulating a whole game is very time-consuming, so it would be better if we simulate the game only for a few rounds (e.g. 20 rounds) and then evaluate the position. However, here is the problem: the evaluation.</p> <p>Evaluating a position is really difficult. Even if one side currently has more kore and more ships, it could still be losing. For example, the enemy is in the process of capturing our shipyards while our fleets are adrift in space with suboptimal flight plans. An evaluation function would require some understanding of flight plans/fleet trajectories/collisions, which is no easy task. Nevertheless, a position can be more easily and accurately evaluated when there are currently no active fleets. This can be achieved by a special rollout where we prevent both players from performing any launch actions, e.g. both players will only spawn ships. I call these rollouts <em>pseudo-rollouts</em>, as they are not "real" rollouts, since the players' actions are restricted. After all fleets have returned to their respective shipyards or no other events occur, we can evaluate the position.</p> <p>Although pseudo-rollouts have several problems and there is no guarantee that they are useful, there is one good thing: they are very cheap to calculate with our event system. Namely, since no new fleets are launched, no new events have to be calculated; we just have to collect all events, check them and possibly execute them until we reach the last event. The last position can then be evaluated using, for example, the kore, the number of ships and shipyards, etc. Furthermore, these pseudo-rollouts give us valuable information, e.g. whether certain events will currently happen or not, which can be used in heuristics and rules.</p> <h1>Evaluation</h1> <p>Unfortunately, I don't have direct benchmarks between the event system and my previous Rust environment because I didn't implement an environment interface for it. Also, I added a lot of extra calculations in the pseudo-rollouts for use in heuristics. However, it should be a little faster than the Rust environment, with a free pseudo-rollout, while using more memory for its precomputations.</p> <h1>How I wanted to use the Simulator</h1> <p>Lastly, although the speed was now starting to be feasible for MCTS, at this point I only had 1-2 weeks before the deadline and I still had other things to do. I did some initial testing where I used an AlphaZero variant in which actions were proposed by heuristics instead of neural networks (which I also tested out using Convolutional LSTMs and the pseudo-rollout positions as inputs). I implemented some heuristics for mining, attacking, defending, etc. Nothing too fancy, since I was already low on time. Each action was ranked according to the heuristics, and then the best couple of them were fed into MCTS as candidate actions, strongly reducing the action space. Within the 3 seconds I got a couple of thousand simulations + pseudo-rollouts (depending on the complexity of the position), but the bot did not really perform well against a rule-based baseline. 
I think there were still some bugs in my MCTS code, too few rollouts, or too weak an evaluation function, and my heuristics were maybe still not good enough, but at this point I was not able to continue working on this approach and just uploaded my rule-based baseline, which takes the best candidate actions proposed by my hardcoded heuristics, does some simulations with them and takes the best one. I used my event environment for this and did around 50-100 simulation steps + pseudo-rollouts, which took my agent between 5 and 1000 milliseconds. Be aware that the heuristics, at that point, were becoming pretty heavy.</p> <h1>Conclusion</h1> <p>Although I couldn't finish my agent, I am still very grateful for this competition; I learned a lot and it was a lot of fun. I thank Kaggle for running this nice competition and also <a href="https://www.kaggle.com/jmerle" target="_blank">@jmerle</a> for his great visualisation tool.<br> I hope that you also learned something new, and if you want you can share your own experiences/ideas about simulation optimization and search algorithms.<br> Cheers</p>
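<p>As a small illustration of the 4-bit flight-plan encoding described earlier in this post, here is a Python sketch (the write-up's actual implementation is in Rust). It packs the first symbol into the most significant nibble of a 32-bit word, following the "N10S3W2" example table; the helper names are my own.</p> <pre><code># Illustrative Python sketch of the 4-bit flight-plan encoding.
SYMBOL_TO_BITS = {str(d): d for d in range(1, 10)}   # "1".."9" map to 0001..1001
SYMBOL_TO_BITS.update({"0": 0b1010, "C": 0b1011,
                       "N": 0b1100, "E": 0b1101, "S": 0b1110, "W": 0b1111})
BITS_TO_SYMBOL = {v: k for k, v in SYMBOL_TO_BITS.items()}

NIBBLES = 8  # up to 8 symbols fit in an unsigned 32-bit word

def encode(plan):
    word = 0
    for i, symbol in enumerate(plan):
        word |= SYMBOL_TO_BITS[symbol] << (4 * (NIBBLES - 1 - i))
    return word

def first_symbol(word):
    nibble = word >> (4 * (NIBBLES - 1))
    # a value of at least 0b1100 would be the quick "is it a direction?" check
    return BITS_TO_SYMBOL.get(nibble, "")             # 0000 means "no symbol"

def drop_first(word):
    mask = (1 << (4 * NIBBLES)) - 1
    return (word << 4) & mask                         # shift the rest up one nibble

word = encode("N10S3W2")
print(hex(word), first_symbol(word), first_symbol(drop_first(word)))
</code></pre>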
Kore 2022
[1 Musketeer] 5th place solution: Rules based
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kore 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! I’m super happy to hear that it was accessible to people who didn’t participate. It was definitely something I was trying to keep in mind when writing.</p> <p>Congrats on 1st! I learned a lot watching our games as you climbed to the top.</p> <p>Sometimes I had difficulty with not realizing a state was initialized, but in general when debugging I just replayed from the last step where I thought the state was “good enough.” Only one of my global variables did I really focus on, so it wasn’t too bad.</p> <p>Of the games Stone talks about, I’ve only really participated in Battlecode, so I can’t speak to the other games. That said, Battlecode is an engineering challenge as much as it is about strategy (e.g. you have limited compute and limited communication, where units run their own program) so I enjoy it a lot. It has its flaws, but if you want something to sink your teeth into Battlecode will absolutely let you.</p> <p>Ah I see. Actually, when I tried making the kore mined more exact (using collection rates) my agent actually did worse than before. It’s hard to decide what makes a route optimal, and in my case having the kore mined be more accurate wasn’t a step in the right direction.</p> <p>I’d guess it’s close to 10 too. And ahh no I didn’t use board.next(). In a lot of cases, anywhere aDg4b estimated and looked ahead, I augmented with some more information. It probably would have been better to use board.next(). I guess it just didn’t occur to me to use it. Where did you specifically notice improvements?</p> <p>In general I agree, every time I considered more information in making a decision, I made an improvement. But qihuaz has a good point, the increased accuracy in the mining routes might have led to some trade-off that I wasn't seeing, so I scrapped it after trying to debug it for a week or so.</p> <p>Yup, unless it's super late game with unmined kore, you can almost always find a route with minimal kore to send an attack along. I also established a hard cap on the amount of kore I would allow whittle attacks to gain at 50.</p> <h2>Preface</h2> <p>First of all, I want to thank the Kaggle team for hosting a lovely competition. I greatly enjoyed seeing the strategies develop over time and reading about people's thoughts about the game on the forums. I wish I had more time to work on my agent, as there was so much more to be done, but I look forward to future simulation competitions on Kaggle! Here are also a few people I'd like to thank in particular:</p> <ul> <li><a href="https://www.kaggle.com/egrehbbt" target="_blank">@egrehbbt</a> for posting his amazing <a href="https://www.kaggle.com/competitions/kore-2022-beta/discussion/317737" target="_blank">code</a> from the beta competition. 
I used it as a jumping-off point and it saved me more time than I could ever imagine.</li> <li><a href="https://www.kaggle.com/jmerle" target="_blank">@jmerle</a> for his AMAZING visualization tool <a href="https://jmerle.github.io/koreye-2022/" target="_blank">Koreye</a>.</li> <li><a href="https://www.kaggle.com/bovard" target="_blank">@bovard</a> for posting the competition on the Battlecode discord, as I probably wouldn't have found it otherwise.</li> <li><a href="https://www.kaggle.com/kirderf" target="_blank">@kirderf</a> and <a href="https://www.kaggle.com/jaroslawbogusz" target="_blank">@jaroslawbogusz</a> for their local evaluation and replay scripts respectively.</li> </ul> <p>I'm a Battlecoder at heart and new to Kaggle, so I took a rules-based approach to this competition. If you don't know what <a href="https://battlecode.org/" target="_blank">Battlecode</a> is, definitely check out Stone Tao's survey of AI programming challenges <a href="https://www.stoneztao.com/blog/posts/ai-challenge-survey/" target="_blank">here</a>. But, in short, it's another AI real-time-strategy game that I treat like a full-time job in the (seemingly short) month that it runs.</p> <h2>Overview</h2> <p>At a high level, I followed the same basic decision sequence laid out in aDg4b's Beta solution. Here's an overview of my bot and my thoughts throughout the competition. Where I can, I'll try to give some reasoning behind some of the decisions I made, or talk about things that I tried but didn't end up working out for me.</p> <h3>Defence</h3> <p>See enemy ships? Send reinforcements. That's defence in a nutshell. At a deeper level, my bot checks all of its incoming allied and hostile fleets and calculates the minimum accumulated difference at each time step. If it's negative, it spawns and requests help from other shipyards if either (1) those shipyards can help by the time the closest enemy fleet gets there or (2) I only have a few shipyards and the one under attack is one I built myself.</p> <p>On the other hand, sometimes you simply can't send or generate help quickly enough. Maybe your opponent coordinated their ships better than you. Maybe they simply had better efficiency in the early game and out-mined you. In a case where my shipyard realizes it can't defend itself anymore, it defers to any other launching action and sends a Hail Mary fleet. Most of the time this just served the purpose of sending a (likely large) fleet to reduce the kore around the area, but it had another interesting side effect too. When I've just captured an enemy's shipyard, if they send all of their ships at once to defend and leave another shipyard helpless, my shipyard will launch a fleet to capture it rather than doing nothing and losing the ships. This results in killing their ability to spawn by resetting both (or more!) enemy shipyards' turns controlled. Since Kore is as much a game about getting resources as using them, this normally wins the game on the spot.</p> <p><strong>A failure:</strong> I experimented with sending exactly the number of ships needed for help, but this often backfires as a shipyard that's getting attacked now is also probably going to be attacked in the future. Here I was basically taking the philosophy of overcorrecting now and worrying about other issues later. As long as I didn't end up in the situation above, i.e. 
sending all of my ships at once, this defence generally performed better.</p> <h3>Offence</h3> <p>My offence related to attacking shipyards consisted of three different types of attacks: captures, coordinated captures, and whittle attacks. In general I was super conservative with offence, only trying to capture when I was <strong>sure</strong> it would be successful. There were some times where this wouldn't be perfect, as I wouldn't be able to predict new mining fleets an enemy would send out, but in most cases taking into account an enemy's kore/cargo for spawning and the reinforcements from nearby shipyards was enough.</p> <ul> <li><p>Coordinated captures took first precedence. This attack consisted of syncing up attack fleets from multiple shipyards to focus on a single shipyard. By not launching them all at the same time, I thought it would make it a little harder to coordinate defence.</p></li> <li><p>Normal captures were the same, except using a fleet from a single shipyard. Shipyards would consider waiting for an attack if it predicted that it could take the shipyard even considering the extra reinforcements the enemy would get in that time. Note that this also had the effect of me being able to choose the maximum kore route to attack the shipyard, since I knew with almost complete certainty that I would convert the shipyard and get the kore.</p></li> <li><p>Whittle attacks were the least common, and generally served to extend an already established lead. Upon gaining a lead in ships, these attacks consisted of sending fleets of size 50 or more to an enemy shipyard, with the idea being that a difference in ships of 120 to 60 is much harder for the opponent to come back from than a difference of 170 to 110. This also had the effect of perhaps forcing the enemy to make suboptimal decisions in the face of incoming hostile fleets. Do they react with the exact amount of ships needed? Do they not do certain actions?</p></li> </ul> <p>A smaller part of my offence was directly attacking enemy fleet to either take their mined kore or deal double damage. The only thing of note I added here was to incorporate a score measurement for attacks to instead give preference to mining routes which were deemed more efficient. Sometimes I would add a feature or fix a bug here and it would actually do worse.</p> <ul> <li>I had an off by one issue where sometimes a shipyard wouldn't attack a fleet even if it could, but fixing that made it attack too often without much gain.</li> <li>I experimented with attacking enemy fleets that were converting into shipyards. I never really saw this activate much, and it was coupled with the previous failed bug fix so I ended up scrapping it.</li> </ul> <h3>Expansion</h3> <p>Ah… my worst nightmare. In my opinion, deciding when and where to make new shipyards is often the most important part of the game. It's a delicate balance between greed and safety. On one hand, expanding early gives you quick access to tiles that were seldom harvested, giving you a surge of kore which can snowball into a lead. On the other hand, creating a new shipyard puts you at a 50 ship disadvantage, so if you can't defend against a quick attack you're out of luck.</p> <p>As for when to expand, I mostly kept aDg4b's logic taking into account the current shipyard spawning capacity and fleet distance. If the current kore was significantly more than what I could spawn before most of my fleets returned, I would make a shipyard. 
The hardest part of tweaking this was mitigating situations where I would expand too early or too late, or make multiple shipyards in a row. To be honest, I didn't find a great solution for this. I first tried enforcing minimum time constraints between building shipyards, but when that didn't work well I settled for lower-bounding the perceived fleet distance and spawning capacity for each shipyard. I also established some minimum and maximum constraints on ship-to-shipyard ratios when expanding.</p> <p>Choosing expansion spots is another challenge. Here's a list of the factors that I took into account to rank each tile on the game board (see the rough sketch at the end of this write-up). Some of them were inspired by chess principles.</p> <p><strong>Kore:</strong> I evaluated the kore at nearby tiles and summed it up, using this as the driving force behind the value of a particular expansion spot. I modified the perceived kore on a tile based on a few things:</p> <ul> <li>Kore is raised to the power of 1.1, serving to give a bonus to large kore stockpiles that will give quick turnover after the expansion. This helps mitigate the immediate 50-ship penalty when defending against a quick attack.</li> <li>A Gaussian distribution decreasing the value of a tile based on the distance from the newly chosen shipyard position. I tried out different distributions here that might synergize with the mining routes I expected to launch, but they didn't seem to help much.</li> <li>A linear penalty decreasing the value of tiles close to other friendly shipyards. This avoids creating shipyards too close together.</li> <li>A bonus multiplicative factor that acted to incentivize taking space away from the opponent. Tiles that were hotly contested (read: were roughly equidistant from enemy and friendly shipyards) were given a bonus if the new spot decreased the distance of the nearest friendly shipyard.</li> </ul> <p>The kore sum was then normalized so that the following penalties were not overshadowed.</p> <p><strong>Penalties:</strong> As important as it is for there to be a lot of nearby kore, it's also important to not overextend into enemy territory.</p> <ul> <li>Distance penalties were given based on both the average distance from all friendly shipyards, and the distance to the closest friendly shipyard.</li> <li>An enemy penalty measuring the danger of a spot by considering the size of a fleet that the enemy could send in the near future. The enemy fleet size was also compared to the amount of reinforcements I could send in reaction to it, with extra penalty added the higher the difference was. I spent a lot of time tweaking this and thinking about things like normalization and growth speed. Ultimately, I used a linear factor based on the difference between the distances of the closest enemy and friendly shipyards. I also put a soft cap on this penalty that grew with the size of the enemy fleet.</li> </ul> <p>Once an expansion was triggered, it continued until either it finished, or some exceptional situation caused it to abort, like going under a minimum number of ships or the enemy beginning to stockpile ships for an attack.</p> <h3>Mining</h3> <p>A quick aside: Before a shipyard decides to mine, it checks if we have a kore surplus and are behind in ships. If the shipyard already has some fleets out and has significantly fewer ships than the average per shipyard, it prefers to spawn instead.</p> <p>Choosing what mining routes to send and when was another one of the most important things to think about. 
I'll split this section into two parts: route generation and route selection.</p> <p><strong>Route generation:</strong><br> Route generation started off simple: choose a point on the board and the closest shipyard and consider all four routes consisting of "L" shaped paths between them. I had an attempt at creating more complex mining routes by considering a second nearby point before the destination, but it often increased the computation by a significant amount without much benefit, so it remains deactivated. By the end, I ended up hard coding in some new route patterns inspired by other competitors' routes.</p> <p>Close to the end of the competition, I started to think more about what destinations a mining route should be sent to. Instead of just the closest shipyard, I would sometimes force all mining routes to go to shipyards that needed more ships, like a newly created one or one that was under attack.</p> <p>If I had more time, I would have spent more time thinking about destinations, as distributing your ships well is a great step towards reacting to enemy threats better. I tried an unsuccessful allocation strategy which would try to keep the number of ships at a shipyard close to a specified target, which was based on its kore production and its danger relative to the size of potential enemy attacks.</p> <p><strong>Route selection</strong><br> Fleet size was chosen based on how quickly my kore reserve could be depleted. With not much kore, it's less important to spawn, so I would send the minimum fleet size, only increasing it if an enemy could launch a fleet to intercept it. Otherwise, I would send all of the available ships. Tieing up a large number of ships in a single fleet or wasting a turn on a really short route are both not good outcomes. As such, I also established some minimum and maximum constraints on fleet and route sizes based on available or total ships. Moreover, an important late addition was to not send large fleets if it resulted in a nearby enemy shipyard being able to launch an attack.</p> <p>I scored routes based on the average kore obtained per turn, except I (mostly) ignored collection rates. This results in a balance of sending more efficient small routes versus more kore-producing long routes. I experimented with some other bonuses and penalties to routes as well. For example, I penalized routes based on the number of turns before it came back where the shipyard could neither spawn nor launch a new fleet. This often had the effect of a shipyard sending routes that were 1 or 2 tiles shorter than the previous one, resulting in a chain of routes that come back at roughly the same time. The hope was that, when I eventually decide to make a new shipyard, I spend less time waiting to launch.</p> <p>I would then launch the highest scoring route, assuming I could launch it and that it passed some other checks (e.g. I wouldn't launch small fleets from shipyards that had better than average spawning production if I had a surplus of kore). Here I also experimented with launching a smaller intermediate route if I could still launch the highest scoring one in the same amount of time, but it didn't perform as well as I would have liked. I think it had a side effect of increasing the average time to launch larger routes, resulting in less kore in the future.</p> <h2>Conclusion</h2> <p>I thoroughly enjoyed this competition. It turned out to be a lot deeper than I initially thought and learned a ton from it and the other competitors. 
Again, thanks to the Kaggle team for a great competition. I look forward to future ones!</p> <p>For anybody who wants to take a look at my agent, here is a link to my <a href="https://github.com/itswin/Kore-2022" target="_blank">code</a>.</p>
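<p>To make the expansion-spot scoring described above a little more concrete, here is a minimal sketch of that kind of scoring function. The weights, the <code>sigma</code> falloff, and the helper names are illustrative assumptions rather than the actual agent code, and it assumes at least one friendly and one enemy shipyard exist:</p> <pre><code>import numpy as np

SIZE = 21  # the Kore board is a 21x21 torus

def dist(a, b):
    """Wrap-around Manhattan distance between two (x, y) positions."""
    dx = min(abs(a[0] - b[0]), SIZE - abs(a[0] - b[0]))
    dy = min(abs(a[1] - b[1]), SIZE - abs(a[1] - b[1]))
    return dx + dy

def score_expansion_spot(kore, spot, friendly_yards, enemy_yards, sigma=4.0):
    """Toy expansion-spot score: nearby-kore bonus minus distance/danger penalties."""
    score = 0.0
    for x in range(SIZE):
        for y in range(SIZE):
            d = dist(spot, (x, y))
            # kore ** 1.1 rewards large stockpiles; gaussian falloff with distance
            tile = kore[x, y] ** 1.1 * np.exp(-(d ** 2) / (2 * sigma ** 2))
            # linear penalty for tiles already served by another friendly shipyard
            d_friend = min(dist(fy, (x, y)) for fy in friendly_yards)
            tile *= min(1.0, d_friend / sigma)
            score += tile
    score /= SIZE * SIZE  # normalise so the penalties below are not outshadowed
    # stay connected: penalise distance to the closest friendly shipyard ...
    score -= 0.3 * min(dist(spot, fy) for fy in friendly_yards)
    # ... and overextension relative to the closest enemy shipyard
    score -= 0.5 * max(0, min(dist(spot, fy) for fy in friendly_yards)
                          - min(dist(spot, ey) for ey in enemy_yards))
    return score
</code></pre>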
LANL Earthquake Prediction
#194 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Please refer to the solution for my first medal: <a href="https://github.com/Silver-Top/Kaggle-LANL">https://github.com/Silver-Top/Kaggle-LANL</a></p>
LANL Earthquake Prediction
#6 Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <a href="/sishihara">@sishihara</a>. I just started with equal weights between lgbm and xgb (both 0.5), and started adding/substracting 0.1 until the oof score was the minimum (finally adding/substracting 0.05). And then I started adding 0.1 NNet and substracting to lgbm or xgb until getting the minimum oof score. I had some fun doing that.. haha. As you can see XGB worked better for me. I added NNets at the end and they helped to improve the score. I just thought that using an optimization algorithm to find the optimal weights could lead to overfitting given the characteristics of the data. </p> <p>Ah.. some important features for me were related to picks. I calculated them on data in original units and also on acoustic_signal transformed to gaussian distribution: <code> x_roll_std = x.rolling(1000).std().dropna().values for i in [0.7,0.75,0.8,0.85,0.9,1.0,1.5,1.75,2.0,2.25,2.5]: peaks, h = find_peaks(x_roll_std, height=[i], distance=2000) if(h['peak_heights'].shape[0]&amp;gt;1): X_tr.loc[segment, 'peaks_count_' + str(i)] = h['peak_heights'].shape[0] X_tr.loc[segment, 'peaks_mean_' + str(i)] = h['peak_heights'].mean() X_tr.loc[segment, 'peaks_std_' + str(i)] = h['peak_heights'].std() X_tr.loc[segment, 'peaks_max_' + str(i)] = h['peak_heights'].max() X_tr.loc[segment, 'peaks_min_' + str(i)] = h['peak_heights'].min() #Peak prominences prominences = peak_prominences(x_roll_std, peaks)[0] contour_heights = x_roll_std[peaks] - prominences X_tr.loc[segment, 'peaks_prom_mean_' + str(i)] = contour_heights.mean() X_tr.loc[segment, 'peaks_prom_std_' + str(i)] = contour_heights.std() X_tr.loc[segment, 'peaks_prom_max_' + str(i)] = contour_heights.max() X_tr.loc[segment, 'peaks_prom_min_' + str(i)] = contour_heights.min() #distance between peaks X_tr.loc[segment, 'peaks_dist_mean_' + str(i)] = np.diff(peaks).mean() X_tr.loc[segment, 'peaks_dist_std_' + str(i)] = np.diff(peaks).std() X_tr.loc[segment, 'peaks_dist_max_' + str(i)] = np.diff(peaks).max() X_tr.loc[segment, 'peaks_dist_min_' + str(i)] = np.diff(peaks).min() </code></p> <p>That's a tip I learned from Grandmaster <a href="/titericz">@titericz</a> some years ago to try to avoid overfitting when working in problems with few or noisy data ;)</p> <p>The main thing I took from my previous 2 competitions was that spending time in really understanding the problem and analyzing data was key to succeed. So in this one I started by carefully reading all papers, and I quickly realized that the data was from P4677. Actually, I realized it a couple of days before it was disclosed. The size of the train and test sets were 60-40% in both, the competition and the paper, so there was a first reason to think that the test set of the competition was the same. However, there was still a chance that the test set of the competition was different.</p> <p>So my first step was to visually select the experiments (cycles) from the training set, trying to match the length of those in the paper (with a ruler :smile:). At first I picked 9 from the 15 full cycles of the training data. 
Then I compared some features and realized that they became much more similar between train and test for the selected subset of experiments, which supported that the test set was the one of the paper: <img src="https://i.postimg.cc/DzbwGrjg/Feature-Train-Vs-Test.png" alt="Comparison AbsMean Train vs Test"></p> <p>Using those selected cycles, I realized an inverse correlation between my CV and the public LB, so I totally forgot about the LB. Actually, my 2 selected submissions scored 1.96799 and 1.82806.</p> <p>Nevertheless, I wanted to use more data than just the 9 experiments (the data was already small!), while trying to match train and test distributions. To solved this, I picked 4 different subsets from the training data, I built a model for each of them and I finally blended their predictions. I used 11 cycles from the training sets in total but in different combinations. For each of those 4 training data subsets I followed the following steps: - Feature selection (using shap most important features and also forward and backward feature elimination). I elimianted all features with different distributions in train-test. - Transform features to standard normal distribution, since I saw a shift between train and test: <img src="https://i.postimg.cc/6pPQTj0q/NS-Transformation.png" alt="Transformation to Standard Normal Distribution"> - Build 3 models: LGBM, XGB and a shallow NNet (1 or 2 layers with less than 7 nodes). This shallow NNets worked well (on transformed data). LGBM and XGB tuned using bayesian optimization. - Blend the 3 models selecting the weights by hand (trying to avoid overfitting).</p> <p>This is the summary of the model: <img src="https://i.postimg.cc/Vkv6fQX2/6-Solution-Summary.png" alt="Summary"></p> <p>The oof predictions of the model with the largest weight in the final blending: <img src="https://i.postimg.cc/JnBjHB1V/oof.png" alt="oof"></p> <p>Finally, congrats to all the winners and to those who survived the leaderboard earthquake! I guess my experience with earthquakes as a Chilean helped me a bit :smiley:</p> <p>Nice that it helped you to learn something <a href="/hatomugi">@hatomugi</a> :)</p> <p>I selected different subsets of experiments in order to try to match the test distribution while trying to maximize the total number of experiments used to train (matching distributions while avoiding to use too few training data). For example, you can notice that the subset of experiments with the highest weight in the blending (Val Set3), contains 7 experiments used as training data (1, 2, 4, 7, 10, 11 and 14). But, if you consider the experiments I used in total, the different models are trained on 11 experiments (1,2,3,4,7,8,9,10,11,12 and 14).</p> <p>Besides, I didn't know which of those subsets was the best one to use, so I thought it was better to use all of them ;)</p>
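<p>For reference, the peak-based features described above can be packaged as a self-contained helper roughly like this. It is a sketch that follows the thresholds and window sizes of the snippet, assuming <code>x</code> is the acoustic signal of one 150,000-row segment as a pandas Series:</p> <pre><code>import numpy as np
import pandas as pd
from scipy.signal import find_peaks, peak_prominences

def peak_features(x: pd.Series) -> dict:
    """Peak statistics of the rolling std of one 150,000-row acoustic segment."""
    feats = {}
    x_roll_std = x.rolling(1000).std().dropna().values
    for i in [0.7, 0.75, 0.8, 0.85, 0.9, 1.0, 1.5, 1.75, 2.0, 2.25, 2.5]:
        peaks, h = find_peaks(x_roll_std, height=i, distance=2000)
        heights = h['peak_heights']
        if heights.shape[0] > 1:
            feats[f'peaks_count_{i}'] = heights.shape[0]
            feats[f'peaks_mean_{i}'] = heights.mean()
            feats[f'peaks_std_{i}'] = heights.std()
            feats[f'peaks_max_{i}'] = heights.max()
            feats[f'peaks_min_{i}'] = heights.min()
            # peak prominences -> height of the "contour" each peak rises from
            prominences = peak_prominences(x_roll_std, peaks)[0]
            contour_heights = x_roll_std[peaks] - prominences
            feats[f'peaks_prom_mean_{i}'] = contour_heights.mean()
            feats[f'peaks_prom_std_{i}'] = contour_heights.std()
            feats[f'peaks_prom_max_{i}'] = contour_heights.max()
            feats[f'peaks_prom_min_{i}'] = contour_heights.min()
            # spacing between consecutive peaks
            gaps = np.diff(peaks)
            feats[f'peaks_dist_mean_{i}'] = gaps.mean()
            feats[f'peaks_dist_std_{i}'] = gaps.std()
            feats[f'peaks_dist_max_{i}'] = gaps.max()
            feats[f'peaks_dist_min_{i}'] = gaps.min()
    return feats
</code></pre>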
LANL Earthquake Prediction
10th place solution (Amjad's view)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks. Shuffled CV was not bad, it only gives over-optimistic estimates of the error. Regarding your approach, I don't think I had similar features, but if you like you can compare with my features from my kernel: <a href="https://www.kaggle.com/amjad85/10th-place-feature-engineering">https://www.kaggle.com/amjad85/10th-place-feature-engineering</a></p> <p>I joined this competition almost from the beginning, and I have to say that when you joined, you enriched the discussion in this forum with many interesting thoughts. Thank you for that.</p> <p>I survived the heart attack :D</p> <p>First of all, I would like to thank the organizers for providing an interesting research problem (even though badly designed) and kaggle for providing the computational resources that I used throughout this competition. My biggest thanks of course go to Giba who put his magical touch on my models. By the way, take a look at his views <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/94466#latest-543723">here</a>. You can check out my kernel <a href="https://www.kaggle.com/amjad85/10th-place-feature-engineering?scriptVersionId=15179664">here</a>, and Giba’s kernel that uses it <a href="https://www.kaggle.com/titericz/top-1-lb-2-254-private">here</a>.</p> <p><strong>Preprocessing:</strong></p> <p>I used denoising with discrete wavelet transform (DWT). However, the wavelet length was important here. Short wavelets like Haar filter too much useful information. I used the best localized wavelet 14 (l14 from wmtsa R package).</p> <p><strong>Feature generation</strong></p> <p>Since the early days, one feature stood out, that is zero-crossing. So I generalized this feature to compute the rate of crossing points at various quantile levels of acoustic data. I computed crossing points features on several levels of DWT decomposed signals (started with 5 levels and went up to 10). Then I expanded the one DWT decomposition to three wavelets with different length. In addition to that, I added features representing the mean and variance at rolling windows of the spectrum of the signal. I also had some features counting peaks, which I took from public kernels and translated to R, and some autocorrelation features from the tsfeatures R package. I had also statistical features computed on different levels of DWT, but we didn’t end up using those given their different distribution.</p> <p><strong>Cross validation</strong></p> <p>I used shuffled KFold stratified by both time and quakes to enhance the features. This had the lowest variance among different strategies, and correlated well with public LB. I kept an eye open on other cross-validation schemes to know what causes overfitting. Towards the end, I also implemented nested cross validation, which is more robust than a simple leave-one-quake out cross-validation.</p> <p><strong>Model training</strong></p> <p>I started with Catboost at the beginning and regretted it given how slow it is. 
Once I switched to LGB, I immediately jumped up the public LB.</p> <p><strong>The twist</strong></p> <p>Transferring the predictions to the test distribution was pulled off by Giba at the last moment. See his post for an explanation.</p> <p><strong>Things that didn’t work for me</strong></p> <ul> <li>Classifying long vs short quakes</li> <li>Augmentation with overlapping segments</li> <li>MFCC features, rolling features, skew, kurtosis</li> <li>Deep learning. My best model was a 2-channel 1D-CNN (mean and sd) – bi-GRU that scored 1.499 on the public LB.</li> </ul> <p><strong>My tips to beginners</strong></p> <p>I’m not new to machine learning, but this competition was my first project with gradient boosting, so it was a great learning experience. I’m also pretty much new to Kaggle competitions. I participated in one some years ago and it was a struggle on my crappy laptop. Now it’s a much better experience. Here are some tips for other beginners:</p> <ul> <li>Spend some time on making your code run faster and easier to develop</li> <li>Start with an ML algorithm that runs fast for development</li> <li>Spend some time implementing your cross-validation</li> </ul>
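<p>The generalized zero-crossing idea described above can be sketched roughly as follows. The quantile levels here are illustrative, and in the write-up the rates were computed on several levels of DWT-decomposed signals in R rather than on the raw signal:</p> <pre><code>import numpy as np

def crossing_rates(signal, quantiles=(0.1, 0.25, 0.5, 0.75, 0.9)):
    """Rate of level-crossings at several quantile levels of the signal."""
    signal = np.asarray(signal, dtype=float)
    feats = {}
    for q in quantiles:
        level = np.quantile(signal, q)
        above = signal > level
        # a crossing happens whenever two consecutive samples lie on opposite sides
        crossings = np.count_nonzero(above[1:] != above[:-1])
        feats[f'crossing_rate_q{q}'] = crossings / len(signal)
    return feats
</code></pre>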
LANL Earthquake Prediction
11th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Using the quantile model was a lot better (CV of about 0.1) than modeling TTF directly in local validation, assuming that both models have access to the validation mean and median repeated cycle lengths. I compared the quantile approach to direct models with different normalization schemes to match the validation average/median statistics.</p> <p>I used local validation to select the hyperparameters of the LightGBM and Neural net and averaged 10 random seeds/initializations for models trained on all or the last 6 complete training earthquake cycles. Local validation was split by time - models trained up to the kth earthquake were evaluated on later earthquakes. </p> <p>Thanks for the feedback! I intend to put the code on GitHub but there is some cleaning up to do first. I ended up not using the CPC features because adding a head that predicts the MAE resulted in leakage, which I realized too close to the deadline. As a starting point I would point to <a href="https://github.com/davidtellez/contrastive-predictive-coding">this excellent resource for learning unsupervised features - CPC by David Tellez</a>.</p> <p>Thank you! It seems promising to combine this learning objective with superior feature engineering and selection demonstrated by your team and others.</p> <p>Many congratulations to the winners, The Zoo, you truly stood out on the private leaderboard! Also congratulations to all teams that survived the private earthquake or learned something during the competition. @Keita111 (+4024 to claim a gold with 2 submissions) is a great example that we should sometimes focus less on the public leaderboard even though we all know we should :-). </p> <p>Special thanks go to all forum contributors. People like @CPMPml, <a href="/mykper">@mykper</a>, <a href="/scirpus">@scirpus</a>, @Abhishek and several others are what make Kaggle great and I am deeply grateful for your contributions.</p> <p>Let’s start with what didn’t end up in the final submission. My efforts during this competition were mostly wasted on modeling the gaps (every 4095/4096 observations) in the data. The assumption was that if one could detect these gaps, it would be possible to order the test chunks, making the test prediction problem trivial. This model works perfectly on the validation data but it doesn't on the test data. I am certain that the test data is not contiguous as stated and is actually divided into 150,000 observation chunks with random gaps in between. It is a bit sad that the organizers ignored many good questions about the data distribution in the “Additional info” topic, requiring the competitors to make many assumptions. Side note: the gap analysis identified test chunks 1db8e8, 35a2d7, 35dd45, 395e0e, 62a403, 996c37, a35e7c, d1eee8 as the start of an earthquake cycle, adding another piece of information to believe that the test data really comes from the test part of P4677.</p> <p>The second biggest chunk of my time was spent on unsupervised learning of features using the raw high frequency data with recurrent networks. 
The code was inspired by <a href="https://github.com/davidtellez/contrastive-predictive-coding">David Tellez's</a> implementation of <a href="https://arxiv.org/pdf/1807.03748.pdf">Contrastive Predictive Coding</a>. The hope was that this would lead to robust features for training and testing. I embedded each chunk of 1,500 observations to a shared embedding of size 32 and defined the following loss heads: - Predict if one embedding directly follows (with some small random gap) another embedding - Predict if a pair of embeddings comes from the same data chunk of 150,000 observations - Predict if an embedding comes from the train or test data - Predict if a pair of embeddings comes from the same earthquake - Predict Time To Failure (TTF)</p> <p>The first three losses are trained for both the train and test data of which the third one is trained using <a href="https://arxiv.org/pdf/1505.07818.pdf">domain adversarial training</a>. The last two losses can only be trained for the train data. Even though I ended up not using any of these models, it became clear that it was very hard to distinguish if raw observations come from the same earthquake. That led me to redefine the learning objective in the final approach.</p> <p><strong>Actual submission</strong> An alternative way to specify the learning objective is the following: predict the quantile TTF of an earthquake, meaning that the target is 0 at the beginning of an earthquake cycle and 1 for the last 150,000 observation chunk of each earthquake. Assuming that this model is trained to minimize the MAE of the quantile, what should each (1-prediction) be multiplied with to minimize the <strong>MAE</strong>? The median of [the earthquake lengths, repeated by the earthquake lengths]! For example, if the test data consisted of earthquake lengths 6, 3 and 2 this would result in median(6, 6, 6, 6, 6, 6, 3, 3, 3, 2, 2) = 6 * (1-quantile_pred) as the optimal predicted time to failure. The nice thing about this reformulation is that we can use the data from the P4677 experiment (I estimated the test median earthquake TTF to be 12) without having to break your head over what earthquake cycles to train on.</p> <p>The models themselves use basic FFT and quantile features for each 150,000 observation chunk and 100 1,500 observation subchunks. This resulted in 40*101 = 4040 features which were fed into LightGBM (feature fraction of 0.003) and neural networks. The first final submission averages the LightGBM and neural network predictions starting from the last 6 complete earthquake cycles. The second submission uses all data except for the first incomplete earthquake cycle (given that the cycle length is unknown we can’t determine its quantile). A nice property of this approach is that the predictions don’t change much when training on different folds so in hindsight it would have been better to submit with different estimated median private test earthquake times.</p> <p>I look forward to your feedback!</p> <p>Thanks for the feedback. The quantile TTF values are simply computed by dividing by the earthquake cycle length.</p> <p>Your second question is quite interesting and I am surprised nobody else seems to have used this post-processing approach. Let's imagine that the quantile method is perfect and let's keep the example with three cycles of lengths 8, 4 and 3 in mind. What should the prediction then be if there is no way to tell what cycle a chunk of 150K observations belongs to? 
Each cycle would have an average absolute error of |cycle_length - 2*prediction_mean|/2 - The error would be cycle_length - 2*prediction_mean at the beginning of a cycle and decrease linearly to 0 by the end of a cycle. The average absolute error (MAE) would be Sum[P(cycle_i)*|cycle_length_i - 2*prediction_mean|/2]. This error is minimized when you set the prediction mean to median of [earthquake length, repeated earthquake_length times]/2. In the example that would lead to an MAE of 8/15*|8-8|/2 + 4/15*|4-8|/2 + 3/15*|3-8|/2 = 31/30 = 1.0333... If we set the prediction mean of the rescaled predictions to the true prediction mean the MAE is worse! The prediction mean of the 8, 4, 3 example is (8*8+4*4+3*3)/(2*15) = 89/30 - MAE of 8/15*|8-89/15|/2 + 4/15*|4-89/15|/2 + 3/15*|3-89/15|/2 = 1.10222... You can see why this is worse by realizing that if you change your mean prediction to a value less than 8, it is worse for more than half of the predictions (8/15) by the same value as it is better for 7/15 of the predictions. This optimal transformation still holds in expectation if the model predictions are unbiased estimates of the median quantile. Had there been one/two very long private test cycles, it would have made a bigger difference.</p> <p>C should be twice the prediction_mean. I have adapted my [8, 4, 3] example since I made a "divide by two" mistake to compute the prediction mean which canceled out in the next step. Thank you for making me aware! The conclusion remains unchanged.</p> <p>To be precise, the quantile targets were set to (number of observations at the end of each 150K until a TTF reset )/(number of observations in that cycle-149999). That way you have training targets that decrease linearly from 1 to 0. I also made sure not to introduce training chunks that overlapped the TTF reset.</p> <p>A cycle length is indeed the duration of the earthquake until a failure (reset of TTF)</p> <p>Does prediction_mean mean the scalar you multiply with the quantile predictions to get the final TTF prediction - Yes, to be precise, the prediction is (1-quantile_prediction) * C.</p> <p>The explanation assumes that the quantile model is perfect, so the error for quantile q is: (1-q)*|cycle_length - 2*prediction_mean|.</p> <p>Hello again, all code has now been published on <a href="https://github.com/ttvand/Kaggle-LANL">Github</a>. The unsupervised part can be found <a href="https://github.com/ttvand/Kaggle-LANL/blob/master/Logic/train_valid_test_cpc.py">here</a>. Let me know if you have any questions.</p>
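<p>A minimal numeric sketch of the quantile-to-TTF post-processing described above, using the [6, 3, 2] example from the post (integer cycle lengths are assumed here; fractional lengths would need a proper weighted median):</p> <pre><code>import numpy as np

def weighted_median_scale(cycle_lengths):
    """Median of the cycle lengths, each repeated by its own length (integers only)."""
    return np.median(np.repeat(cycle_lengths, cycle_lengths))

def ttf_from_quantile(quantile_pred, cycle_lengths):
    """Map a predicted within-cycle quantile back to a time-to-failure."""
    C = weighted_median_scale(cycle_lengths)
    return (1.0 - np.asarray(quantile_pred)) * C

# The post's example: test cycles of length 6, 3 and 2 -> scale factor 6
print(weighted_median_scale([6, 3, 2]))                   # 6.0
print(ttf_from_quantile([0.0, 0.5, 1.0], [6, 3, 2]))      # [6.  3.  0.]
</code></pre>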
LANL Earthquake Prediction
12th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>As kaggle tradition dictates, one shall share if he/she/they win the gold. So here I am, writing my very first solution post.</p> <p>I would like the thank kaggle and the LANL for hosting such an interesting competition, drawing fellow kagglers/data science hobbyists' attention to these earth acoustic data which otherwise would not have been made well known.</p> <p>I would also like to thank <a href="https://www.kaggle.com/yifanxie">Yifan</a>, who had introduced me to kaggle 3 years ago and given me many valuable guidance and advice, as well as other kagglers I was fortunate to team-up with in the past competitions. I have learn very valuable experiences from you. It's exactly this, the openness and the sharing spirit that made kaggle the most wonderful place to learn, exchange ideas/knowledge and improve. </p> <p>Now about how I landed the 12th position. (I am sorry about the long post. I just wrote as my memory rewind so the text lacks structure)</p> <p><strong>The thought process:</strong> From my past competition experiences, I find the most important thing in any competition is that one must make the best effort to understand the data.</p> <p>This was exactly what I was aiming to do at the beginning. I was using a lgbm model with public features, but made many attempts on different CV strategies in order to understand why there is such a big different in the CV score. </p> <p>I began with just stratifying by target, then by the 17 earthquakes, then manually selected groups of earthquakes. These experiments led to the following understanding:</p> <ul> <li><p>earthquakes with similar length can CV very well. for example earthquake #1 lasted 11.54 seconds, #10 - 11.42s, #11 - 11.02s. If you do a 3 fold cv on these three, you get very good MAE. Even for longer earthquakes like #2. #7 and #14, CV result is very good and reliable. </p></li> <li><p>the problem is between the groups. If you train a model on shorter earthquakes your model will consistently underestimate long quakes, and model trained on long quakes will consistently overestimate the short ones.</p></li> </ul> <p>I have then made some brief attempts to classify different groups of earthquakes. However the more I thought about it the more I realized there are many levels of challenges involved. To make a maybe inappropriate metaphor, predicting an earthquake is like predicting a man's death, you can probably predict accurately his aging speed, the eventual end however has many random factor involved that is simply unpredictable. That's why similar lengths earthquakes CV very well.</p> <p>I have also thought about estimating the test data distribution between different type/lengths of earthquakes, maybe using some kind of Gaussian mixture model, and then quickly realized this is not possible. And it is this thought process that play an important part in my final submission. I was trying to image what the test data would look like given the above understanding. The competition designer would probably want to know if someone can come up with a generic model that does well on earthquakes of different length. 
So my hypothesis is: it's less likely that the test blocks of 150000 data points are taken randomly from different earthquakes. Instead it's probably from a number of complete quakes of different lengths. This means the test data distribution should like very much like that of the training data, i.e. uniform on the left and start to taper off on the right due to different earthquake lengths.</p> <p>So my final two submissions are based on two CV strategy.</p> <p><strong>Technical Detail</strong></p> <p>The wining CV strategy is based on the following earthquake splits: below 9s (class 0): 0, 3, 5, 6, 8, 12, 13 between 10-12s (class 1): 1, 9, 10, 11, 15, 16 above 12s (class 2): 2, 4, 7, 14 note I put 0 in class 0 after confirming on CV results. The CV is a repeated stratifiedKfold of n=2, repeat=5. The first fold is used for making models CV'ed within each earthquake class and make prediction for the second fold. Then within the second fold is another repeated stratifiedKfold of n=2, repeat=5 to stacking.</p> <p>for the first fold training I have used a lgbm model, a xgb model, 2 svr models, a kernel ridge and a NN model. So basically the same as what Andrew did in his kernel <a href="https://www.kaggle.com/artgor/earthquakes-fe-more-features-and-samples">https://www.kaggle.com/artgor/earthquakes-fe-more-features-and-samples</a> On top of that I have made another 2-layer GRU model with custom time-series features. This RNN model has respectable validation score very close to the lgbm. I had to remove catboost model at the end because it was taking too long. A few crash/bug fix left me not enough time to do more experiments.</p> <p>for the second stacking fold I have only used a lgbm model and a NN model. </p> <p>I convinced myself that such approach is ok because first of all there is no leakage with split between earthquakes, and secondly I know that similar earthquakes CV reasonably well and I am counting on it to pass some kind of earthquake class information to the stacking model, and lastly I used the repeated stratified kfold to exhaust different earthquake combinations as much as possible to cover different cases and exploit the power and bagging (if you make enough reasonable attempts, the average is bound to be not far off right?)</p> <p>In terms of features, 90% of the features used were based on public features. I have made some additional 60 features just based on welch power spectral density method. But to be honest I am even sure how much impact they've made. As a result I have about 400 tabular features, and out of laziness I made a further 400 by extracting the same features from first quarter and the last quarter of the same 150000 blocks and subtract them. :'D I had made 45 timeseries features for the RNN model which were also inspired by the public features. I just selected a few and made some adaption based on feature importance analyses from the public kernels.</p> <p>These features are used for fold 1 base model training. Then the tabular features were concatenated with predictions from the base models for stacking training.</p> <p>As a result I have observed reasonable CV improvement at the stacking level (I monitor the score for each earthquake in validation)</p> <p>However the public LB result is way too bad. In fact the wining submission scored 1.61. What gave me the confidence (or courage) to submit it was my hypothesis about the test data distribution (plus i was never in the high ranks throughout so nothing to lose). 
The TTF distribution is close enough to that of the train set (I will update a plot), i.e. roughly uniform on the left and tapering off from 10s onwards on the right.</p> <p>The second submission was a blend of different models (LGBM, XGB, CatBoost, NN, and 3 RNNs) using repeated stratified k-fold results, based on this split:</p> <ul> <li>7s (class 0): 6, 8</li> <li>8s (class 1): 0, 3, 5, 12, 13</li> <li>9s (class 2): 9, 15</li> <li>11-12s (class 3): 1, 10, 11, 16, 4</li> <li>14-16s (class 4): 2, 7, 14</li> </ul> <p>The CV scores for this strategy were quite consistent across the different models, between MAE 1.8 and 2.2, and it had a public LB of 1.449. So I meant for it to be a conservative submission. But the distribution was not as pretty, and it did turn out to be pretty bad on the private board.</p> <p>I also made a 1D CNN model hoping that it might capture any useful information in the time gaps (this approach was motivated by some interesting discussion a while ago about data sampling, etc.). The model has consistent predictive power, i.e. the MAE is always around 2.3-2.5 for different types of quakes, which I think might be interesting. However, I couldn't add it to the stacking either, because it took too long to train and I simply ran out of time.</p> <p>If you made it all the way to here, then thank you very much for your patience, and I hope this post provides you some useful information.</p> <p>I will definitely improve/slim down my posting next time.</p> <p>Thank you to all the fellow kagglers.</p> <p>Thank you CPMP for your kind comment. I am glad you found interesting information in the post : )</p> <p>Thanks! I see you have lots of competition experience under your belt already. I am sure you will do well in the coming competitions. :D</p>
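<p>As an illustration of the Welch power-spectral-density features mentioned above, a minimal sketch could look like this. The band count, the summary statistics, and the assumed ~4 MHz sampling rate are illustrative choices, not the exact features used:</p> <pre><code>import numpy as np
from scipy.signal import welch

def welch_psd_features(segment, fs=4_000_000, n_bands=20):
    """Summarise the Welch power spectral density of one 150k segment in bands."""
    freqs, psd = welch(np.asarray(segment, dtype=float), fs=fs, nperseg=4096)
    feats = {}
    for k, idx in enumerate(np.array_split(np.arange(len(freqs)), n_bands)):
        feats[f'welch_band_{k}_mean'] = psd[idx].mean()
        feats[f'welch_band_{k}_max'] = psd[idx].max()
        feats[f'welch_band_{k}_std'] = psd[idx].std()
    return feats  # 20 bands x 3 statistics = 60 features
</code></pre>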
LANL Earthquake Prediction
15th Place Memo and CV Scheme
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Thank You!</h2> <p>It was my first Kaggle challenge. (... I pursued to the end. I got into <a href="https://www.kaggle.com/c/quora-insincere-questions-classification">Quora Insincere Questions Classification</a> but got distracted with other stuff in live.) I'm pretty happy to have survived the shake up and have finished so high up in the ranking. I'd like to thank the community to share insights and ideas so well (especially to @CPMPml and his <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/92679#latest-543125">no magic</a> feature selection)! That's the spirit of Kaggle! It was a great learning experience, and I will try to give something back here.</p> <h2>What Matters</h2> <p>I made that huge jump in the LB and finished well. I'm still trying to figure out, to what degree it was luck and how much it was skill and a robust model. The shakeup might suggest a large influence of luck. Notching the model slightly, like adjusting the mean, can have a big impact on your standing in the final LB as @sushize showed <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/94324#latest-543708">here</a>. So the final scoring was indeed quite fragile and one had to be lucky to win a gold medal. I don't think, it's pure luck despite the big shakeup, though. We still see the grandmasters in high spots and the winning solutions used models with few features and robust CV strategies.</p> <p>I think this challenge taught us that applying fancy ML techniques is not enough and it sometimes matters more to understand the basics. We have to: * get a good grasp of the problem and understand the data well * build a good and robust CV strategy * avoid overfitting by all means</p> <p>As the data was so little, these key ML ingredients played a major role in this competition and models ware relatively less important. Trust your thinking, think well, and do not apply stuff, you don't fully understand (though try to!).</p> <h2>The Heart: Cross Validation and Significance</h2> <p>After some initial struggle with the large data and spending quite some time trying to get NN's to work, time was getting short for me. That helped me to concentrate on the important. I committed myself to the models that worked best, which were GBM's (I used a blend of LightGBM, XGBoost, and CatBoost in the end). Most importantly, I realised that a robust CV strategy <em>you stick to</em> is crucial.</p> <p>We've seen that the quality of practically all models depended on the actual TTF. They do well for intermediate values and bad for very low or high values of TTF. So, depending on the distribution of TTF in the test / validation portion affects the score a lot. Hence, I tried to split the training data, such that this effect was minimised. At the same time, I got the feeling that I should take entire cycles in or out. Hence, I searched for splits of the 17 cycles into groups of three to four cycles that had similar TTF <em>and</em> size. 
One of those splits was:</p>

| cycles | fraction | mean TTF |
| --- | --- | --- |
| 0, 3, 7, 8 | 21.0% | 5.86 |
| 1, 10, 11 | 20.8% | 5.67 |
| 2, 12, 15 | 19.9% | 5.68 |
| 4, 5, 9, 16 | 19.7% | 5.56 |
| 6, 13, 14 | 18.6% | 5.61 |

<p>Groups 0 and 16 are the very short "cycles" before and after the first and last earthquake. Note that with this split, I also don't have leakage between overlapping segments.</p> <p>These folds still didn't give me the stable CV scores I wanted. I created some more of these even splits and decided to actually do four of these 5-fold group splits and take the mean MAE of each of them, such that I had four scores. This resulted in a rather slow CV scheme, but it was stable and I could work not only with a CV score but also with an uncertainty: <code>np.std(cv_mae, ddof=1)</code>, where <code>cv_mae</code> are the four averaged scores of the 5-fold group splits. These uncertainties were typically of order <code>0.005</code>. Note that this scheme assumes that the (private) test set has the same TTF distribution as the training set.</p> <p>After establishing that, I was in a situation where I could not only test models and features with a robust CV scheme, but also had a measure of what a significant improvement in the score is.</p> <h2>What We Predict On</h2> <p>The assumption about the distributions of features and TTF in the test set that I made in setting up my CV scheme is quite reasonable. Some said otherwise, but I disagree. It was claimed by @gpreda that <a href="https://www.kaggle.com/gpreda/lanl-earthquake-new-approach-eda">the feature distribution was different for train and test set</a>. I could not find that for my features, as I subtracted the mean of the acoustic data on each segment. That makes physical sense, and I already did it before I knew about the comparison.</p> <p>The assumption that the TTF distributions are similar should also be reasonable. @mykper did some <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/91583">nice work</a> on the public test set. We've seen a somewhat different distribution of the TTF there, and it also maxed out at some value below 10, whereas the training data had values up to 16. That's true. However, he also found out that there are only two cycles in the public test set. These were quite typical if you compare them with the training set. There is no reason to assume that we will predict on structurally different data than what we have seen in the training data.</p> <p>Also one <strong>note on the public LB</strong>:</p> <p>The public LB was calculated on just about 350 segments. This small number, together with the high variance in the predictions, alone should tell you that you cannot trust the public LB. Now compare that number with the size of the CV: we use several folds and eventually use the entire training set for the score. That is 4200 segments, more than 10 times as much! When using overlapping segments - as I also did - the ratio gets even bigger, although the information gain will not grow linearly with the number of segments anymore. Anyway: the public LB score is calculated on a tiny set, especially when compared with the CV on the training data.
I've never trusted the public LB in this competition.</p> <h2>The Features and Minimalism</h2> <p>I took @CPMPml approach of feature selection he explained in the <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/92679#latest-543125">no magic</a> discussion and augmented it with my uncertainty of my CV scores. As we have little data, and I would have ended up with almost no features, if I'd only taken features with high significance. I decided to take features that improved my model by at least half a standard deviation.</p> <p>Some side note from a physicist: If you have two measurements <code>m_1</code> and <code>m_2</code> with standard deviations <code>s_1</code> and <code>s_2</code>, the difference of the measurements <code>m_1 - m_2</code> does <em>not</em> have an uncertainty / standard deviation of <code>s_1 + s_2</code>, but of <code>sqrt(s_1^2 + s_2^2)</code>.</p> <p>With that approach, I ended up with just 7 features!</p> <h2>Final Words</h2> <p>That's pretty much it. Some ensembeling of different GBM's (LightGBM, XGBoost, CatBoost) with (a little) optimised hyper-parameters and, as I've said in the beginning, probably a bit of luck.</p> <p>There are some things I've seen in other solutions, that I'd like to have used, too. There is the special treatment of outliers as them ABC did (see their approach <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/94407#latest-543731">here</a>). Also nested CV would probably have done some good.</p> <p>Much of the detective work on the test set, was not so fruitful, I think. Despite we even <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/90664">know where it came from</a>. The only think that was useful - and probably quite a bit, although I didn't use it - was the information about its mean TTF. It seems to have helped a few teams. So after all, my assumption that the test set is not as strictly fulfilled as I've assumed in my CV scheme.</p> <p>My seven features are named like that, if it helps: <code>python selected_features = [ 'roll100mean_num_peaks_5', 'spectral_centroid_perc90', 'FFT_1650_1800', 'noise_perc75', 'raw_num_peaks_8', 'spectral_bandwidth_perc90', 'perio_iqr25', ] </code> So not necessarily very straight forward feaatures, but nothing too fancy or even magic either.</p> <p>It should. I've encountered quite a few that don't, though... (Or they forget, get distracted by fancy stuff, ...). It doesn't hurt to refresh the basics from time to time.</p> <p>I don't have the 7 features at hand at the moment, but no, I did not use the mean. I substracted the mean from each and every segment, so I would be zero all the time anyways. (Note, that I did not normalise the standard deviation, as it contains information!)</p> <p>The 7 features are no "magic" features and I'm pretty sure that one could find 5-10 other features that would perform equally good. Using only the features that significantly improve your model prevents overfitting. That's all whats going on here. I had engineered 900 features to choose from. Using all of them on just 4200 training examples (or a bit more with overlapping segments) would give you less than 5 data points per feature... That's badly constrained (although some features correlate strongly) and genearlises poorly.</p> <p>Correct. I didn’t make the training set similar to the test set as there was no (strong enough) evidence we would actually habe a different test set. 
For me, the risk of overadjusting, or even adjusting in the wrong direction, was too high.</p>
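<p>A minimal sketch of the feature-acceptance rule described above, accepting a feature only if the CV improvement exceeds half of the combined uncertainty <code>sqrt(s_1^2 + s_2^2)</code>; the example scores are made up:</p> <pre><code>import numpy as np

def is_significant_improvement(scores_without, scores_with, n_sigma=0.5):
    """Accept a feature if the CV MAE drops by at least n_sigma combined sigmas.

    scores_* are e.g. the four averaged MAEs from the repeated 5-fold group CV.
    """
    m1, s1 = np.mean(scores_without), np.std(scores_without, ddof=1)
    m2, s2 = np.mean(scores_with), np.std(scores_with, ddof=1)
    sigma_diff = np.sqrt(s1 ** 2 + s2 ** 2)  # uncertainty of the difference m1 - m2
    return (m1 - m2) >= n_sigma * sigma_diff

# made-up CV scores without / with a candidate feature
print(is_significant_improvement([2.05, 2.06, 2.04, 2.05], [2.03, 2.04, 2.02, 2.04]))
</code></pre>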
LANL Earthquake Prediction
179th place / top 5% (not submitted) solution - first competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <a href="/dhaqui">@dhaqui</a> ! See you in the future</p> <p>Thanks <a href="/abhinand05">@abhinand05</a> ! See you in the future</p> <p>I am not new to Data Science, but that was my first ML competition (apart from 'Homesite Quote Conversion' for testing purposes, three years ago).</p> <p>I will share the process regarding my best (not submitted) solution. I did not trust my CV and got in the [Earth]shakeup!! </p> <p><strong>Scores</strong></p> <p>Public score: 1,86776 Private score: 2,48755 (it would be #179, top 5%; it is worth noting, though, that if everyone else was evaluated by their best solution in private board, the rank would be different). Average CV score: 2,04663 </p> <p><strong>Features</strong></p> <ul> <li>Got about ~150 basic features from public kernel. </li> <li>Generated ~800 new features using tsfresh package. </li> <li>Dropped correlated columns with threshold 0.95. </li> <li>Decided to work with only 50 features. </li> <li>Combined different techniques to rank features, including RFECV.</li> <li>Blended my personal random taste with this rank and obtained the following 50 features: 21 from public kernel: <code> abs_max_roll_mean_1000, avg_first_10000, avg_last_10000, avg_last_50000, iqr, min_first_10000, min_roll_mean_1000, min_roll_std_100, min_roll_std_1000, q01_roll_mean_100, q01_roll_mean_1000, q01_roll_std_10, q01_roll_std_1000, q05_roll_mean_100, q05_roll_std_100, q05_roll_std_1000, q95_roll_mean_100, q95_roll_mean_1000, q99_roll_mean_100, q99_roll_mean_1000, std_roll_mean_1000 </code> 29 from those generated with tsfresh: <code> abs_energy, agg_linear_trend__f_agg_mean__chunk_len_500__attr_stderr, ar_coefficient__k_10__coeff_0, augmented_dickey_fuller__attr_teststat, autocorrelation__lag_4, c3__lag_2, change_quantiles__f_agg_var__isabs_False__qh_0.4__ql_0.2, change_quantiles__f_agg_var__isabs_False__qh_0.8__ql_0.2, count_below_mean, cwt_coefficients__widths_(2, 5, 10, 20)__coeff_0__w_2, energy_ratio_by_chunks__num_segments_10__segment_focus_4, fft_coefficient__coeff_37__attr_imag, fft_coefficient__coeff_66__attr_abs, fft_coefficient__coeff_95__attr_angle, fft_coefficient__coeff_99__attr_real, number_crossing_m__m_0, number_crossing_m__m_-1, number_cwt_peaks__n_5, number_peaks__n_1, number_peaks__n_500, partial_autocorrelation__lag_8, quantile__q_0.1, quantile__q_0.2, range_count__max_1__min_-1, ratio_beyond_r_sigma__r_5, ratio_value_number_to_time_series_length, spkt_welch_density__coeff_2, sum_of_reoccurring_values, value_count__value_1 </code></li> </ul> <p><strong>Model</strong> Obtained the following pipeline, optimized with TPOT: <code> make_pipeline( StackingEstimator(estimator=ElasticNetCV(l1_ratio=0.1, tol=0.001)), Normalizer(norm="l2"), Normalizer(norm="l1"), StackingEstimator(estimator=LassoLarsCV(normalize=True)), LinearSVR(C=15.0, dual=True, epsilon=1.0, loss="epsilon_insensitive", tol=1e-05) ) </code></p> <p><strong>Things I tried that did not work</strong> Augmentation, as shared in this kernel: <a 
href="https://www.kaggle.com/alinealmeida/basic-feature-benchmark-with-quantiles-augmenting">https://www.kaggle.com/alinealmeida/basic-feature-benchmark-with-quantiles-augmenting</a></p> <p>(but I should have tried it again after the new features I generated...)</p>
LANL Earthquake Prediction
19th place solution (GBDT + post-processing)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks. I learned a lot from your post in discussion.</p> <p>First of all, thanks to the organizers and all participants in this competition! And thanks a lot to my brilliant teammates.</p> <p>In this thread, I want to share the detail of our submission finally ranked 19th place. Our another submission is <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/94450#latest-544035">described by my teammate</a>. Please check it.</p> <h2>Model</h2> <p>The prediction is made by averaging of LightGBM, XGBoost and CatBoost. Each model is averaged by 25 different seeds. Hyperparameters are borrowed from kernels by <a href="https://www.kaggle.com/artgor/seismic-data-eda-and-baseline"></a><a href="/artgor">@artgor</a> and <a href="https://www.kaggle.com/gideonvos/earthquake-prediction-with-xgboost-lb1-496"></a><a href="/gideonvos">@gideonvos</a></p> <p>We used quake based 5-fold split. The distribution of target values can differs a lot between the folds. Early stopping in this setting easily leads overfitting to validation set. So we use - The same number of rounds of GBDT through folds (like lightgbm.cv) - boost_from_mean = True (or its equivalent) in all models (discussed in <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/91500#latest-528287">this thread</a>) - Splits manually picked from all possible splits of 15 quakes into 5 folds. We picked splits which estimated to have low between-fold variance and mean of validation score.</p> <p>We also trained models predict TSF (time since failure). We used both TTF and TSF in post-processing.</p> <h2>Features</h2> <p>We used feature from <a href="https://www.kaggle.com/vettejeep/masters-final-project-model-lb-1-392">Masters Final Project</a>. Some of them have very different distribution between train and test. So we trained GBDTs which use a single feature, did adversarial validation, and discarded features achieve higher AUC than 0.55.</p> <p>Then, we did a forward feature selection, which iteratively adds a feature with the best gain. It converges after selecting 15 features.</p> <h2>Post-processing</h2> <p>As already discussed, the test TTF distribution can be estimated using the figure shown in the organizer's paper. The figure is a <a href="https://en.wikipedia.org/wiki/Vector_graphics">vector graphics</a>. So it can be expanded arbitrarily without loosing image quality. Using this, we measured peak to peak distance of sheer stress manually and dug out TTFs after train set as the below image. (Of course, it may have some small errors)</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/544429/13411/estimated_test.png" alt="estimated_test"></p> <p>Blue line shows estimated TTFs. Then, we estimated where the test set is. Orange line shows MAEs between public LB scores for the constant values 0-10 (shown in <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/91583#latest-535268">this discussion</a>) and scores for them calculated in public LB size segments. 
(These inferences are based on the assumption that the test set is made from one large segments and each instance is adjacent to another instance. We confirmed the test set has no overlap by brute-force matching) Red lines show estimated starting point of Public LB, starting and ending point of Private LB. Surprisingly, it almost matched the TTF expected by <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/91583#latest-535268"></a><a href="/mykper">@mykper</a>.</p> <p>Finally, we calculated the median of peak values weighted by the length of the quakes in test set and make the prediction as follows.</p> <p><code> pred = np.where(pred_ttf &amp;lt; 4, pred_ttf, weithed_median - pred_tsf) </code></p> <p>We think predicting large part of TSF is difficult and predicting small part of TTF is relatively easy. So we use raw TTF if it is smaller than 4 and median - TSF otherwise. </p> <p>Post-processed oof prediction to training set: <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/544429/13412/postprocessed_valid.png" alt="postprocessed_valid"></p> <p>What I regret is that it was late to find this and it did not come up with a clever use. I am so surprised by the interesting use of it in the solution of other participants.</p> <p>Our final submission is scored 2.10809 in Public LB (2.40159 in Private LB). I want to thank my teammates for agreeing to choose such a risky submission.</p> <p>That's all. Thank you for everyone.</p>
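<p>The single-feature adversarial-validation filter described above can be sketched roughly like this. sklearn's <code>GradientBoostingClassifier</code> is used here as a stand-in for whatever GBDT was actually used; the 0.55 cut-off follows the write-up:</p> <pre><code>import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def adversarial_feature_filter(train_df, test_df, max_auc=0.55, cv=5):
    """Keep only features whose single-feature train-vs-test AUC stays below max_auc."""
    y = np.r_[np.zeros(len(train_df)), np.ones(len(test_df))]
    keep = []
    for col in train_df.columns:
        x = pd.concat([train_df[col], test_df[col]]).values.reshape(-1, 1)
        auc = cross_val_score(GradientBoostingClassifier(), x, y,
                              scoring='roc_auc', cv=cv).mean()
        if auc <= max_auc:
            keep.append(col)
    return keep
</code></pre>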
LANL Earthquake Prediction
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>That really depends on the type of problem and type of model. Sometimes models will still generalize better if you give them diverser training examples and aligning training and test sets can be really overfitty. So can't give a clear answer here.</p> <p>Thanks a lot to the hosts of this competition and congratz to all participants and of course to my amazing teammates.</p> <p>What made this competition tricky was to find a proper CV setup that you believe in as the public LB gave bad feedback for private LB. This was my first competition where this was the case and it took me a while to completely ignore public LB, but it was necessary.</p> <p>I will now try to summarize some of the main points that helped us to win this competition. I am posting these elaborations in the we-form as we are a team and everyone contributed ideas and knowledge. Special thanks to <a href="/ilu000">@ilu000</a> <a href="/dott1718">@dott1718</a> <a href="/returnofsputnik">@returnofsputnik</a> <a href="/dkaraflos">@dkaraflos</a> <a href="/pukkinming">@pukkinming</a> who worked hard the last few weeks on the comp.</p> <p><strong>Acoustic signal manipulation and features</strong></p> <p>As has been discussed in the forums and shown by adversarial validation, the signal had a certain time-trend that caused some issues specifically on mean and quantile based features. To partly overcome this, we added a constant noise to each 150k segment (both in train and test) by calculating <code>np.random.normal(0, 0.5, 150_000)</code>. Additionally, after noise addition, we subtracted the median of the segment. </p> <p>Our features are then calculated on this manipulated signal. We mostly focused on similar features as most participants in this competition, namely finding peaks and volatility of the signal. One of our best final LGB model only used four features: (i) number of peaks of at least support 2 on the denoised signal, (ii) 20% percentile on std of rolling window of size 50, (iii) 4th and (iv) 18th Mel-frequency cepstral coefficients mean. We sometimes used a few more features (like for the NN, see below) but they are usually very similar. Those 4 are decently uncorrelated between themselves, and add good diversity. For each feature we always only considered it if it has a p-value &gt;0.05 on a KS statistic of train vs test.</p> <p><strong>Differences between train and test features</strong></p> <p>After doing abovementioned signal manipulation, we had more trust in our calculated features and could focus on better studying differences between train and test data feature distributions. We found that the test data should look different to training data in a few ways when comparing features by e.g., applying KS statistics between train and test. That’s when we decided to sample the train data to make it look more like we expect test data to look like (only from looking at feature distributions). 
We started by manually upsampling certain areas of train data, but gave up on that after a few tries and then we found a very nice way of aligning train and test data.</p> <p>So what we did is that we calculated a handful of features for train and test and tried to find a good subset of full earth-quakes in train, so that the overall feature distributions are similar to those of the full test data. We did this by sampling 10 full earthquakes multiple times (up to 10k times) on train, and comparing the average KS statistic of all selected features on the sampled earthquakes to the feature dists in full test. A visualization for this looks like this (this is a limited visualization and not necessarily the one we chose to make our final selection of EQs):</p> <p><img src="https://i.imgur.com/9evXuhV.png" alt="KS statistic train subsample vs. test"></p> <p>The x-axis is the average target of the selected EQs in train and the y-axis is the KS statistic on a bunch of features comparing the distribution of that feature for the selected EQs vs the full test data. We can see that the best average KS-statistic is somewhere in the range of 6.2-6.5. You can also see nicely here that a problematic feature like the green one deviates clearly from the rest, this would be a feature we would not select in the end.</p> <p>After careful examination of these results, we decided in the end to subsample the train data to only consider earthquakes [2, 7, 0, 4, 11, 13, 9, 1, 14, 10] numerating all 17 earthquake cycles we have in train. The mean of this sample is 6.258 and the median is 6.031.</p> <p><strong>CV</strong></p> <p>Now that we had sampled train data that we though to be similar to test just purely based on statistical analysis, and now that we had features that should not have any time leaks, we decided on doing a simple shuffled 3-fold on that data. Higher fold results are similar. We now tried to improve this CV as well as possible. </p> <p><strong>Models</strong></p> <p>Our final submit is a hillclimber blend of three types of models: (i) LGB, (ii) SVR, (iii) NN. The overall CV score on this was ~1.83. The LGB is using a fair loss with relatively moderate other hyperparameters. The SVR is also quite simply set-up. The NN is a bit more complicated with a few layers on top of a bunch of features. The real interesting thing here is that we do multi-task learning by specifying additional losses next to the ttf loss that we weight higher than the others. We have one additional binary logloss with the target specifying if the ttf is &lt;0.5 and one further MAE loss on the target of time-since-failure. This helped to balance some of the predictions out a bit and specifically helped to better predict some of the areas at the end of earthquakes that make some weird spikes. The NN had the best single MAE, but blending improved. Actually, just blending LGB and NN would have produced the best private LB score (2.25909). Adding SVR did improve CV though.</p> <p>With all the steps described above, we also managed to make the distribution of test predictions very similar tho oof predictions. The following image shows for a single LGB the oof (blue) vs. test predictions (orange). The KS-test between those two does not reject the null hypothesis of them being equally distributed.</p> <p><img src="https://i.imgur.com/bspTxd4.png" alt="LGB oof (blue) vs. 
test (orange) prediction dist"></p> <p><strong>Ideas that have potential</strong></p> <p>We had quite a few ideas with potential that did not make it into our final submission. One area is to better use the time-since-failure prediction, which we used only as an additional loss in our NN. Modeling tsf works better than ttf. It can help to manually adjust a few predictions which have large discrepancies between tsf and ttf predictions, like the end of EQs. Also, tsf predictions can be a reasonable proxy for predicting the approximate length of the EQ. So for example, we had one model that normalized the ttf targets to be in range 0-1, then predicted this normalized target and scaled it by the ttf+tsf prediction. This was usually very close to our simpler models so we did not tune it extensively; I just feel that this has further potential. </p> <p>The following kernel runs an LGB model on most of what I explained above and would also score 1st place with 2.279 private MAE:</p> <p><a href="https://www.kaggle.com/ilu000/1-private-lb-kernel-lanl-lgbm/">https://www.kaggle.com/ilu000/1-private-lb-kernel-lanl-lgbm/</a></p> <p>The following kernel runs a blend between LGB and NN scoring 2.25993 on private LB:</p> <p><a href="https://www.kaggle.com/dkaraflos/1-geomean-nn-and-6featlgbm-2-259-private-lb">https://www.kaggle.com/dkaraflos/1-geomean-nn-and-6featlgbm-2-259-private-lb</a></p> <p>Further augmenting did not help because you can only augment certain areas. We also focused only on the KS statistic; I see no necessity to use something more complex for this data, Occam's razor.</p> <p>The problem still is that you need to fix the signal in some way to make certain features more consistent, like we did with the noise and median subtraction.</p> <p>Yeah, for the second submit we basically added one more shorter EQ to the train data, as we got a bit of cold feet with the mean being so high and wanted a backup. Logically we should have taken an even higher mean. Actually the mean of our second submit's predictions is only ~5.9, but as already said, it would have also made first place.</p> <p>In the training data we have 17 earthquakes. We sampled 10 earthquakes 10,000 times out of that (you could also just try all possible combinations) and compared how the feature distributions of those samples of 10 EQs map to the feature distributions in test. We used the KS statistic as an estimate for the distribution similarity and finally picked as training sample those 10 EQs where the average KS statistic was the lowest. Does that make it clearer? </p> <p>Yeah, I use the scipy function. The p-value just gives you an indicator whether the null hypothesis of those two distributions being equal can be rejected. So in this case looking at the statistic or the p-value is kinda similar; we focused on the statistic here.</p> <p>Each dot in that picture: (1) sample a certain number of earthquakes from train, (2) calculate the KS statistic for each feature between the sampled train and the full test, (3) draw the KS value on the plot. The best sampling is based on the lowest average KS statistic across features. </p> <p>To your first point: exactly, that's how we did it. 10 EQs is close to how many EQs we expected in test, but kind of arbitrary. We found good agreement between train and test feature distributions choosing 10, but you can get to similar results choosing a different number of EQs. To your second point: yes, we only considered features with a higher p-value. We also used a bunch of features for the sampling procedure explained above.
</p> <p>This is elaborated in the comments below.</p>
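<p>A minimal sketch of the earthquake-subsampling procedure described above, assuming hypothetical <code>train_features</code> and <code>test_features</code> tables (one row per 150k segment, with an <code>eq_id</code> column on the train side) and a list of feature column names; the KS statistic is the same scipy <code>ks_2samp</code> mentioned above, everything else is an illustrative stand-in:</p> <pre><code>import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

def avg_ks(train_features, test_features, features, eq_subset):
    """Average KS statistic between a subset of train earthquakes and the full test set."""
    sub = train_features[train_features["eq_id"].isin(eq_subset)]
    stats = [ks_2samp(sub[f], test_features[f]).statistic for f in features]
    return float(np.mean(stats))

def sample_best_subset(train_features, test_features, features,
                       n_eq=10, n_trials=10_000, seed=42):
    """Randomly sample n_eq earthquakes many times and keep the subset whose
    feature distributions best match the test set (lowest average KS)."""
    rng = np.random.default_rng(seed)
    all_eqs = train_features["eq_id"].unique()
    best_subset, best_stat = None, np.inf
    for _ in range(n_trials):
        subset = rng.choice(all_eqs, size=n_eq, replace=False)
        stat = avg_ks(train_features, test_features, features, subset)
        if stat &lt; best_stat:
            best_stat, best_subset = stat, sorted(subset.tolist())
    return best_subset, best_stat
</code></pre> <p>With 17 train earthquakes and subsets of 10 there are only about 19k possible combinations, so exhaustive search over all of them is also feasible, as noted above.</p>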
LANL Earthquake Prediction
22nd place solution described (molaee)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It's my pleasure to share the ad-hoc method I used in this competition with you. I am not an expert in ML; however, I hope that my method opens some fruitful discussions for all of us.</p> <p>My kernel was initially forked from <a href="https://www.kaggle.com/artgor/even-more-features">here</a> (Thank you <a href="/artgor">@artgor</a> )</p> <p><strong>Feature extraction:</strong> After running the kernel, I realized that the TTF is not estimated very well in the vicinity of failure times (it becomes saturated). I worked a lot on feature extraction to overcome this issue. My focus was mostly on time-frequency and MFCC related features, and I could increase the Pearson correlation from 0.65 (in the original kernel) to 0.68 (when measured on the entire data). The Pearson correlation increased from ~0.25 to 0.38 when measured on the data in the vicinity of the failure times (&lt;3.2 s). My best feature: the area under the 0.2 quantile of the cumulative STFT (presented in log scale) of the audio signal (NFFT=128, signal de-trended). After a couple of days, I had about 1300 similar features!! -&gt; LB public score improved by 0.1, from ~1.5 to 1.4</p> <p><strong>Benefit from the leak:</strong> I became aware of the leak only 2 days before the submission deadline. I did not carefully analyze the leak data, nor did I match the mean of the predicted TTF with the one presented in the paper. I just removed the 5th and 6th EQ data from the training set. Visually inspecting the test and train data, I had the feeling that after removing these two EQs, test and train data become very similar to each other.</p> <p><strong>Model:</strong> I took the lgb model presented in the forked kernel, then created 3 categories of data sets: (A) the entire data excluding the 2 above-mentioned EQs, (B) data very close to failures (3.2 s and less), (C) data far from failures (6 s and more). For each data set, a separate lgb model was trained. The model corresponding to (A) had the master role. If its prediction was below a threshold (3 s) or above a threshold (7 s), then its output was merged with that of (B) or (C) in a linear manner.</p> <pre><code>for ind, row in X_test.iterrows():
    if prediction_lgb_m[ind] &lt; thr_l:
        alpha = (thr_l - prediction_lgb_m[ind]) / thr_l
        prediction_lgb[ind] = alpha * prediction_lgb_short[ind] + (1 - alpha) * prediction_lgb_m[ind]
    elif prediction_lgb_m[ind] &gt; thr_h:
        prediction_lgb[ind] = prediction_lgb_long[ind]
    else:
        prediction_lgb[ind] = prediction_lgb_m[ind]
plt.hist([prediction_lgb, prediction_lgb_m], 50)
</code></pre> <p>TTF histogram over test data using model (A) [orange] and combined models (A,B,C) [blue]: <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/547591/13437/Capture.PNG" alt=""></p> <p><strong>Topics for discussions:</strong></p> <p>1) Does it make sense to have a cascade of lgb models as I described? In my opinion, this method should add another high-level branch to the tree of the original model (A). But I could not get the same results when I increased the number of layers or leaves in (A)!!!
Indeed, when I used models (B) and (C) I could get pretty lower/higher TTF values for the training data close/far to the failure time. The model (A) could not provide such results.</p> <p>2) After one month working on feature extraction, I had about 2300 similar and correlated features. Compared to my original 1300 features, I could obtain a bit better CV values (2.01 --&gt; 1.99) and I could see less fluctuations over (y - ypred) training data set. However, LB public score became a bit worse (1.399 --&gt; 1.43) !! As if I had over fitted. Right? What is the general procedure to reduce the number of features from 2300 to a lower value?</p> <p>3) I tried to reduce the number of features by removing highly correlated features (0.95 and more) from the feature list. This reduced the number of features to about 800. However, CV increased (+0.01), as well as LB public (+0.03).</p> <p>Finally, I decided to use my entire 2300 features, 3 LGBs, and the removed EQs 5 and 6 from the training set.</p> <p><strong>What did not work:</strong> 1) PCA over the features increased the LB public score. So I did not follow it. 2) Combination of LGB with other methods was not really a big success. I preferred not to step in that way.</p> <p>FYI: matching the mean of TTF of the test data to the one presented in the paper, could result 2.32 (LB private) </p>
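<p>A minimal sketch of the three-model cascade described above, with hypothetical <code>X</code>/<code>y</code> arrays standing in for the author's feature table and assumed LightGBM parameters (the post uses the forked kernel's model); the blending thresholds and the linear merge follow the snippet shown earlier:</p> <pre><code>import lightgbm as lgb
import numpy as np

# assumed training parameters, not the author's actual configuration
PARAMS = {"objective": "regression_l1", "learning_rate": 0.05,
          "num_leaves": 31, "verbosity": -1}

def fit(X, y, num_boost_round=1000):
    return lgb.train(PARAMS, lgb.Dataset(X, label=y), num_boost_round=num_boost_round)

def train_cascade(X, y):
    """Train (A)/(B)/(C) on the TTF ranges described above; X, y are placeholders."""
    model_a = fit(X, y)                            # (A) all data (minus EQs 5 and 6)
    model_b = fit(X[y &lt;= 3.2], y[y &lt;= 3.2])        # (B) very close to failure
    model_c = fit(X[y &gt;= 6.0], y[y &gt;= 6.0])        # (C) far from failure
    return model_a, model_b, model_c

def cascade_predict(models, X_test, thr_l=3.0, thr_h=7.0):
    model_a, model_b, model_c = models
    p_m = model_a.predict(X_test)     # master prediction
    p_s = model_b.predict(X_test)     # short-ttf specialist
    p_l = model_c.predict(X_test)     # long-ttf specialist
    alpha = np.clip((thr_l - p_m) / thr_l, 0, 1)   # linear merge near failure
    out = np.where(p_m &lt; thr_l, alpha * p_s + (1 - alpha) * p_m, p_m)
    return np.where(p_m &gt; thr_h, p_l, out)
</code></pre>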
LANL Earthquake Prediction
24th place single model (didn't submit)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/cuteliudong/rolling-rnn">https://www.kaggle.com/cuteliudong/rolling-rnn</a></p> <p>I have really learnt a lot, thank you very much</p>
LANL Earthquake Prediction
27th place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>27th place Solution</h1> <p>I show the 27th solution of the LANL Earthquake Prediction competition in this article.</p> <p>The main reason I write this article is because I wanted to share the findings I got from participating in the competition.</p> <h2>0. Bad News</h2> <p>First of all I did not know the discussion of <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/90664#latest-535844">Are data from p4677?</a>..</p> <p>If I read this article, I would skip some steps and just perform 4. Make Submission File.</p> <h2>1. Create and Explore the features</h2> <p>I referred to other kernels and created some(11,226) features for LGBM.</p> <p>I standardized the features with the standard deviation and examined the relationship with the target TTF, then I obtained several findings.</p> <ul> <li><p>Many of the features have a peak at around TTF ~ 0.3[sec], and the value decreases rapidly to TTF: 0 ~ 0.3 [sec], and the value decreases gently at TTF &gt; 0.3[sec] (Fig. 1(left)).</p></li> <li><p>Found the layered structure for each segment at TTF &gt; 4[sec] (Fig. 1(left)).</p></li> </ul> <p>From these findings, I thought that many features depend more strongly on the ratio of elapsed time until the next earthquake occurs than the relationship with TTF.</p> <p>So, I defined the value TTFp(= TTF/(TTF + TSF)) as the ratio of the elapsed time until the next earthquake occurs. Then I obtained clearer results between TTFp and the values of features than the relationship with TTF (Fig. 1 (right)).</p> <p>|<img src="https://i.imgur.com/jTAWQmE.png" alt="Imgur">| |:--:| |Fig.1 (left)The relationship between TTF and the value of a feature. (right)The relationship between TTFp(= TTF/(TTF + TSF)) and the value of a feature.|</p> <p>However, on the other hand, I can't found the features that related strongly the seismic intervals (= TTF+TSF).</p> <p>From this, I gave up calculating accurate TTF and focused on accurate TTFp prediction and private earthquake interval prediction.</p> <p>|<img src="https://i.imgur.com/nMFdiKy.png" alt="Imgur">| |:--:| |Fig. 2 Correlation diagram of earthquake interval and the value of certain feature (Calculate the moving standard deviation for the high frequency component of the signal, and calculate its median) for data with TTFp of 0.6 or more and 0.7 or less. Although the value of feature that has the highest correlation with the earthquake interval, it was insufficient to predict the accurate earthquake interval.|</p> <h2>2. CNN-model</h2> <p>I used the CNN-model for accurate TTFp prediction.</p> <p>I prepared the reduced data($150 \times 37$ch) which pre-processed(Raw, Rolling-mean, Wavelet-denoise, e.t.c.) the waveform (150,000 data) and binned every 1000 data to calculate statistics(percentile, std, count, e.t.c.) as the input data.</p> <p>I structure the model as shown in the Fig. 3, and select Adabound (lr = 1e-4, final_lr = 0.1) as the optimizer.</p> <p>|<img src="https://i.imgur.com/hvk3Ci9.png" alt="Imgur">| |:--:| |Fig. 3 Outline of CNN-model|</p> <p>I obtained MAE for TTFp: 0.112(OOF) with this CNN-model (Fig. 
4).</p> <p>|<img src="https://i.imgur.com/cfHo3fS.png" alt="Imgur">| |:--:| |Fig. 4 (Upside: orange) TTFp prediction with CNN-model(OOF), (Upside: blue) Target TTFp, (Downside: blue) Residual between prediction and target. |</p> <p>Examined the relationship between the target TTFp and the predicted TTFp, it was found that the accuracy of predicted TTFp at a segment with a long seismic interval became worse on TTFp:0.5~0.9 (Fig. 5).</p> <p>|<img src="https://i.imgur.com/kHw1RcV.png" alt="Imgur">| |:--:| |Fig. 5 (Upper left) Correlation diagram of predicted TTFp with CNN-model and target TTFp. (Lower left) Correlation diagram of predicted TTFp with CNN-model and target TTFp at a segment with a short seismic interval. (Lower right) Correlation diagram of predicted TTFp with CNN-model and target TTFp at a segment with a long seismic interval. The accuracy is not so good at a segment with a long seismic interval on TTFp:0.5~0.9.|</p> <h2>3. Prediction Private data</h2> <p>I predicted the earthquake interval of the private data by comparing the distribution of TTFp predicted for each segment with the distribution of TTFp predicted by CNN-model for test data (Fig. 6).</p> <p>This prediction was an unnecessary action if I look at the argument <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/90664#latest-535844">Are data from p4677?</a>, but I think that I could show that you could predict Private data to some extent without citing the dissertation.</p> <p>|<img src="https://i.imgur.com/5fdKU2v.png" alt="Imgur">| |:--:| |Fig 6. Comparison of TTFp frequency of Test data prediction and TTFp frequency of OOF prediction for each segment. (Upper left)&lt;8sec segment, (Upper mid)~9sec segment, (Upper right)~12sec segment, (Lower left)~14sec segment, (Lower mid)&gt;16sec segment, (Lower right)Mixing some Training data segment to fit the distribution of test data. From these results, in the test (private) data, it was estimated that one ~9[sec] segment, one ~12[sec] segment, five ~14[sec] segments, and one &gt;16[sec] segment.|</p> <h2>4. Make Submission File</h2> <p>Since most of the earthquake interval of the segment included in Private is about 14 seconds and short and at least is about 10 seconds, I decided to exclude the segment with short time interval (7to9 sec) from Training data.</p> <p>Furthermore, regardless of the model, the predicted maximum TTF was likely to affect the average of the seismic intervals of the segments included in the training data.</p> <p>Therefore, learning was performed so that the ratio of the time interval of segments included in Test data and the ratio of the interval of segments included in Training data are the same.</p> <p>The learning performed in two cases using three segments and four segments.</p> <p>The model used LGBM instead of CNN, as training data were reduced.</p> <p>|<img src="https://i.imgur.com/4Yk4bI9.png" alt="Imgur">| |:---:| |Fig 7. The OOF result of learning with four segment. (Upside: orange) TTF prediction with LGBM(OOF), (Upside: blue) Target TTF, (Downside: blue) Residual between prediction and target.|</p> <p>I selected the prediction obtained by these LGBMs as the final submission file.</p> <p>|Selection|Private Score|Public Score| |:--:|:--:|:--:| |3 segments| 2.48137| 2.24303| |4 segments| 2.41465| 1.89035|</p> <p>I didn't select other submissions as the final submission files, although the public score was better than the final submission files, but it was not reliable because it did not estimate Private data.</p>
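<p>A minimal sketch of the TTFp target construction, assuming a hypothetical per-segment table <code>seg</code> with <code>quake_id</code> and <code>ttf</code> columns and approximating each cycle length by the maximum ttf observed within that cycle:</p> <pre><code>import pandas as pd

def add_ttfp(seg):
    """Add TTFp = TTF / (TTF + TSF), i.e. the fraction of the cycle still remaining."""
    seg = seg.copy()
    # approximate the inter-earthquake interval by the largest ttf seen in the cycle
    cycle_len = seg.groupby("quake_id")["ttf"].transform("max")
    tsf = cycle_len - seg["ttf"]                 # time since the previous failure
    seg["ttfp"] = seg["ttf"] / (seg["ttf"] + tsf)
    return seg

# To turn a TTFp prediction back into seconds you still need a cycle length,
# which is why the private earthquake intervals also had to be estimated:
# ttf_pred = ttfp_pred * estimated_interval
</code></pre>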
LANL Earthquake Prediction
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you! Seems I have a strong will and not moved by the Public Leaderboard score. No, I didn't mix with other gradient-boosting results; I will do so next time because I would regret if I did not do so and lost by a score like 0.001. I did not expect such small differences in the prize zone. No reason using Catboost, it was simply the first gradient boosting library I saw in a public kernel. 🐣</p> <p>PS: FFT features are not ranking high in my feature importance, and that is also so in one of the organizers' random-forest paper, so I though ​FFT features are not effective (which is probably not true)</p> <p>Umm, that's difficult to answer. I felt there aren'​t much information we can extract from the acustic ​data and I felt I have more than enough features to get all the information. This is intuitive, not logical. I did some trial and errors trying more percentiles, switching std_truncated/std_nopeak, but those were manual and not systematic. For FFT, I don't think real, imag, phase are translational invariant; that is, depends on where you choose time 0, so I only use magnitude (abs), but percentiles of abs are reasonable; simply, I forgot trying that. Short answer: intution ​and some trial and error 😅</p> <p>Update: Feature importance label fixed (5th June GMT 3 am) </p> <p>I select segments from the traning ​​data and create a private-set-like data based on the inter-earthquake times seen in the figure "Are data from p4677?"</p> <p>mykper: <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/90664#latest-535844">https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/90664#latest-535844</a></p> <p><img src="https://junkoda.github.io/figs/quake/quake_final_private_like.png" alt="quake_private_like_set"></p> <p>The dashed line is 11.5 sec.</p> <p>The first segment is counted twice; the private set has 8 segments. The last segment is created from the 14-sec segment and shifted up by 2 seconds, and the last 4.5 second is cut, treating that as the public part.</p> <p>I use 32 features:</p> <ul> <li>standard deviation (std)</li> <li>std_nopeak: Std of data that are not part of peaks</li> <li>kurtosis_truncated: Kurtosis of data with abs(v - mean(v)) &lt; 20</li> <li>7 peak counts: Number of peaks with hight &gt; 50, 75, ..., 200</li> <li>5 percentiles: 95 percentile - 5 percentile, 80 - 20, 70 - 30, 60 - 40.</li> <li>trend: slope of robust linear regression to 30 sub chunks of std_truncated</li> <li>trend_error: Abs difference in the slope of RANSAC and Huber fit</li> <li>power spectrum; Fast-Fourier Transform the data and average the absolute value in 15 bins</li> </ul> <p>I choose combinations such as 95 percentile - 5 percentile to avoid direct dependence on the mean; which is drifting with time. Same for the peak height; the peak height is defined as (max - min)/2.</p> <p>The std_truncated (std instead of kurtosis in kurtosis_truncated) works almost as well as std_nopeak.</p> <p>I randomly select 2000x1000 training chunks of length 150_000 from my private-like set, which is 1000 times the number of independent/non-overlapping chunks and put all of them into CatBoost. 
I do not provide CV data to the regressor; everything is the training set.</p> <p>I also tried to predict time since failure using all the training set and tried to stitch it together with time to failure, but I was not able to do that successfully.</p> <p>This is the feature importance:</p> <p><img src="https://junkoda.github.io/figs/quake/quake_feature_importance.png" alt="Feature importance"></p> <p>Thanks. Exactly, just added 2.* something to the time to failure.</p> <p>No. I have installed them..., but I was mistaken about the deadline by 1 week and didn't have time for the stacking phase. A default-parameter CatBoost single model 😅</p> <p>I fixed the feature importance ordering. FFT features (power spectrum power n) are working. I saw somewhere that the low-frequency power spectrum is an interesting feature near the earthquake, and the high rank of power0 confirms that.</p>
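<p>A minimal sketch of a few of the features listed above for one 150,000-sample chunk <code>v</code>; the percentile differences and binned FFT power follow the descriptions directly, while the peak counts use a simple <code>find_peaks</code> convention as an approximation (the post defines peak height as (max - min)/2, which is not reproduced exactly here):</p> <pre><code>import numpy as np
from scipy.signal import find_peaks

def chunk_features(v, n_fft_bins=15):
    """One plausible reading of some of the 32 features for a single 150k chunk."""
    f = {}
    f["std"] = v.std()
    # percentile-difference features avoid direct dependence on the drifting mean
    for lo, hi in [(5, 95), (20, 80), (30, 70), (40, 60)]:
        f[f"p{hi}_minus_p{lo}"] = np.percentile(v, hi) - np.percentile(v, lo)
    # binned power spectrum: average |FFT| magnitude in 15 frequency bins
    spec = np.abs(np.fft.rfft(v - v.mean()))
    for i, band in enumerate(np.array_split(spec, n_fft_bins)):
        f[f"power{i}"] = band.mean()
    # peak counts above a set of thresholds (approximate convention)
    centered = np.abs(v - np.median(v))
    for h in range(50, 201, 25):
        f[f"num_peaks_gt_{h}"] = len(find_peaks(centered, height=h)[0])
    return f
</code></pre>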
LANL Earthquake Prediction
3rd place memo
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for hosting such an interesting competition, and congratulations to all the top teams.</p> <p>As already mentioned in the <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction">discussion</a>, we expected the test set to come from p4677, so we could roughly estimate the test ttf. Based on this information, we adjusted the ttf of train. The green line (I will call it best y) is 12.625.</p> <p>By training on the adjusted ttf above, you can get a good private score. For example, with an RNN using only 6 features and a simple GRU, you can get 2.33 on the private LB. Actually, LightGBM with many features is slightly better, so we mainly used it.</p> <p>How do we find the best y? For example, consider the situation:</p> <ul> <li>train data: earthquakes 1~11 of train</li> <li>test data: earthquakes 12, 13, 14, 15 of train</li> </ul> <p>The best value can be calculated as follows.</p> <pre><code>import numpy as np

max_ttf_list = [8.828100, 8.566000, 14.751800, 9.459500]
chunk_length_list = [33988602, 32976890, 56791029, 36417529]

best_score = None
best_y = None
for y in np.linspace(5, 15, 1000):  # search range
    error_list = []
    segment_list = []
    for max_ttf, chunk_length in zip(max_ttf_list, chunk_length_list):
        slope = max_ttf / chunk_length * 150000
        segment_num = chunk_length // 150000
        ttf_true = [max_ttf - slope * i for i in range(segment_num)]
        ttf_adjusted = [y - (y / segment_num) * i for i in range(segment_num)]
        error = abs(np.array(ttf_true) - np.array(ttf_adjusted)).sum()
        error_list.append(error)
        segment_list.append(segment_num)
    score = sum(error_list) / sum(segment_list)
    if best_score is None or score &lt; best_score:
        best_score = score
        best_y = y
print(best_y)
</code></pre> <p>If we train with the default ttf, the MAE on the test data is 2.251. If we train with the best y and the modified ttf, the MAE on the test data is 1.948.</p> <p>It may be said that this competition was decided by a leak, but it was interesting to think about creating a better model on that premise.</p> <p>We would like to say thanks to everyone. That's all. Thank you for reading.</p> <p>Thank you for your reply. It's cool. My code is lengthy...</p> <p>Thank you for your reply. To be honest, we did not spend much time on parameter tuning. My teammate currypurin used <a href="https://github.com/pfnet/optuna">optuna</a> and shared this param with me.</p> <pre><code>param = {
    'num_leaves': 37,
    'objective': 'regression',
    'max_depth': 9,
    'learning_rate': 0.01,
    'boosting': 'gbdt',
    'feature_fraction': 0.311473528266664,
    'bagging_freq': 14,
    'bagging_fraction': 0.4190138426177889,
    'bagging_seed': 42,
    'metric': 'mae',
    'lambda_l1': 0.0003661365327691201,
    'lambda_l2': 0.2855479018074398,
    'verbosity': -1,
    'nthread': -1,
    'random_state': 0,
    'min_data_in_leaf': 40
}
</code></pre> <p>Usually, when the data size is small and there are many features, I start with conservative parameters. (This is also fast, so you can spend more time on feature engineering and so on.)</p> <pre><code>param = {
    "num_leaves": 20,
    "max_depth": 4,
    "bagging_freq": 1,
    "bagging_fraction": 0.7,
    "feature_fraction": 0.7,
    ...
}
</code></pre>
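<p>Given a best y from the search above, the adjusted training target can then be built per quake cycle; a short sketch with hypothetical per-segment <code>ttf</code> and <code>quake_id</code> arrays:</p> <pre><code>import numpy as np

def adjust_ttf(ttf, quake_id, best_y=12.625):
    """Rescale each quake cycle so its ttf starts at best_y and falls linearly to 0.

    ttf, quake_id are hypothetical per-segment arrays; best_y comes from the search above.
    """
    ttf = np.asarray(ttf, dtype=float)
    quake_id = np.asarray(quake_id)
    adjusted = np.empty_like(ttf)
    for q in np.unique(quake_id):
        mask = quake_id == q
        adjusted[mask] = best_y * ttf[mask] / ttf[mask].max()
    return adjusted
</code></pre>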
LANL Earthquake Prediction
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! I just read the description of your solution and I'm surprised normalizing ttf didn't help in the end. But I also found through feature selection that the features that predict ttf well are not necessarily gonna be the best features to predict normalized ttf. I had to redo feature selection when I switched to normalized ttf.</p> <p>Hello kaggers!</p> <p>There have been many good solutions posted already, but I haven't yet seen some of the things I did, so I'd like to share some of my insights in this competition.</p> <ul> <li><p>First thing I realized is that no public kernel or my own features can distinguish between high and low quake intervals. Models usually settle for something that can be described as the mean quake interval multiplied by a factor between 0 and 1 with 1 being far from the next quake and 0 being right before the quake.</p></li> <li><p>Knowing this, I decided to eliminate the influence of the time between quakes on the target and instead predict that value between 0 and 1. I called it time fraction and it's <code>time_to_failure/time_between_quakes</code>. Then I just multiply the predicted values by the mean time between quakes (actually, a slightly lower value worked better, so I treated it as a hyper-parameter). This allowed my models to run longer before early stopping, converging to a better solution.</p></li> </ul> <p>*<em>Feature engineering *</em></p> <ul> <li><p>I used ~20 features in the end. Here is the picture of features' permutation importances: <img src="https://i.ibb.co/LYZGPF4/fig1.png" alt="permutation importances"></p></li> <li><p>Most of them involve some sort of signal filtering followed by some feature extraction. To filter the signal, I used Butterworth filters (<code>filtered_f1-f2_feature_name</code>), wavelet decompositions of various levels (<code>wavelet_name_decomposition_level_feature_name</code>), or removing large peaks. My most common features were histogram-based. Basically, it's the summed values at the peak or the tail of the histogram. Some other features are peak-based or from public kernels.</p></li> <li><p>I did extensive feature selection, which improved my result significantly. I basically tried all possible wavelets in the pywt library with histogram-based features and did a stage-wise forward feature selection on them. To speed up feature selection, I used a simple SVR with a gaussian kernel instead of my final model.</p> <p><strong>Model</strong> I used a simple feed-forward neural net in pytorch, I found that it works much better with my features than LGBM. I added some bells and whistles to it, like adjusting learning rate on plateaus, batch normalization, adding 0 mean, 0.05 std gaussian noise to the training batches, training several models on each fold and picking the best one to avoid unlucky random weights initialization.</p> <p><strong>Cross-validation</strong> I loaded training data on a per-quake basis, so there was no overlap between quakes. I didn't test anything else, it just made sense to me to do it that way. I did 15-fold quake-based CV uniting the first and the last bits to the neighboring quake. 
So, my folds were [0,1], 2, 3, 4, 5, 6, 7, 8, 9 10, 11, 12, 13, 14, [15, 16]. Also, I extracted features from 150,000-point chunks with 25,000 shifts, which resulted in a significant overlap between adjacent chunks, but it worked fine in my NN model.</p> <p><strong>Scaling</strong> I identified the best scaling factor for the train set (by treating it as a hyper-parameter), then adjusted it based on the info about the test data (upscaled it a bit). I used 2 coefficients: ~1.05 and ~1.13. The first one gave me a better public LB and it correlated well with my CV, so I used it to check my model. The way I came up with 1.05 is pretty random and makes little sense, but this is the multiplier that brings the mean of my CV predictions to the target mean ttf in the train set. The second coefficient is based on what I expected the mean test set ttf to be.</p></li> </ul> <p>Overall, it was a fun but stressful ride. I think predicting time fraction instead of ttf could be useful to other winning models here and it could be a better target to predict in real life, as it indicates the relative state of the system (kind of like a danger level).</p> <p>Finally, here is my CV predictions plotted: <img src="https://i.ibb.co/HXmdGqH/fig2.png" alt="CV predcitions"></p> <p>UPDATE: Just adjusted my best submission's mean from ~5.7 to 6.32 and my score improved to 2.24127 :O Oh well, this is the lottery we all played.</p>
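<p>A minimal sketch of the kind of feed-forward net described above, with batch normalization, 0-mean/0.05-std Gaussian noise added to the training batches, and learning-rate reduction on plateaus; the layer sizes, optimizer, and epoch count are assumptions rather than the author's exact setup:</p> <pre><code>import torch
import torch.nn as nn

class FeedForward(nn.Module):
    # hidden sizes are assumptions; the post only says "a simple feed-forward neural net"
    def __init__(self, n_features=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Linear(64, 32), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train_fold(model, train_loader, val_loader, epochs=100, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=5)
    loss_fn = nn.L1Loss()  # MAE, matching the competition metric
    for _ in range(epochs):
        model.train()
        for xb, yb in train_loader:
            xb = xb + 0.05 * torch.randn_like(xb)  # 0-mean, 0.05-std noise on training batches
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(xb), yb).item() for xb, yb in val_loader) / len(val_loader)
        sched.step(val_loss)  # reduce learning rate on plateaus
    return model
</code></pre>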
LANL Earthquake Prediction
87th place private and 1st place public solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I want to thank Kaggle and the great community of this competition participants for making this journey possible! This is a write-up of our work with my teammate <a href="/idog90">@idog90</a>. </p> <p>First, please check out <a href="https://medium.com/@zaharchikishev/how-to-lb-probe-on-kaggle-c0aa21458bfe?source=friends_link&amp;sk=543b4ebcafee24697979b69efdb35adf">the LB probing strategy</a> that gave us the public leader-board first place. In this post I will describe the other components of our solution, focusing on the submission that gave us 87th place.</p> <p><strong>features and models</strong> We generated 270 features, mostly taken from the public kernels with some modifications. By using recursive feature elimination with permutation importance by LGBM we reduced the feature set to 90 features. Regarding the segment mean, we subtracted the mean from the signal before calculating the features, but left it as an additional feature because it improved both CV and LB. Leaving it was possibly a critical mistake. Models that participated are LGBM, XGB, CatBoost, SVR and shallow NN by fastai. Additionally we added a median of classification output of LGBM, by dividing the targets space into 11 classes. The 6 models oof predictions were blended together by linear stacking with MAE objective function and non-negativity constraints. Using 6 models and stacking gave more or less similar scores to the best single model scores (XGB or CatBoost), so this setup is probably redundant but no harm.</p> <p><strong>CV strategy</strong> We used 25k stride to generate train segments, and then assigned every 35 consecutive segment into a group of its own. Then we removed the segments with data overlapping into the next group, and used stratified group 5-fold CV strategy with the groups as defined above. I think it worked well for us. </p> <p><strong>secret weapon</strong> We used two extra elements that gave us the medal. The second final submission didn't include it and scored somewhere around 500 on the private.</p> <p>a. First, we merged short quakes 5 and 6 together, considering them as a single long quake. This is of course to make the train distribution to be more similar to the test. This is the change that made the difference.</p> <p>b. Second, we added public test segments to the train with their best estimated scores from the LB probing strategy linked above. I don't think it had much impact, it was less than 2% of the train data, and noisy. </p> <p><strong>using MIP solution</strong> As also mentioned in the medium article linked above, our mixed integer programming (MIP) solution for the LB probing had a nice ability to calculate public LB scores of potential submissions with great precision (0.02). In a sense, the MIP model learnt very well the subspace of all "normal" submissions and therefore gave excellent estimations for anything in that subspace. We used it to estimate public LB scores for our final submissions candidates on the last day.</p> <p><strong>what didn't work</strong> Huge amount of things that we tried didn't work well. To name the major ones:</p> <p>a. 
<a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/94435#latest-543574">batch start identification</a> didn't help b. transformer NNs on raw data gave bad CV and LB scores c. spectograms with CNN was OK but worse than our best models and we didn't include it at the end. e. reweighting of the train set into the known test distribution, fitting histograms of the classification models to match the test distribution, - didn't work well on test, still an enigma for me. f. genetic programming, - but we are not experts, tried only a simple configuration g. more complex stacking, - seems to overfit</p> <p>Thanks for reading that far, and hopefully see you in the future competitions!</p> <p>thank you! We did quite well, after all. It was so easy to drop into no-medals zone. </p> <p>I used CPLEX for 2 years while working at IBM research, and love it. My major is also in optimization, so I love to use these tools.</p> <p>Congratulations on your results, hopefully we will compete once again. </p>
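<p>A minimal sketch of linear stacking with an MAE objective and non-negativity constraints as mentioned above, with hypothetical <code>oof_preds</code> (n_samples by n_models) and <code>y</code> arrays; this is one straightforward way to set it up, not necessarily the team's implementation:</p> <pre><code>import numpy as np
from scipy.optimize import minimize

def fit_nonneg_mae_blend(oof_preds, y):
    """Find non-negative blend weights that minimize MAE on out-of-fold predictions."""
    n_models = oof_preds.shape[1]

    def mae(w):
        return np.mean(np.abs(oof_preds @ w - y))

    w0 = np.full(n_models, 1.0 / n_models)
    # MAE is not smooth, so a gradient-free or quasi-Newton method with bounds is a
    # pragmatic choice here
    res = minimize(mae, w0, method="SLSQP", bounds=[(0.0, None)] * n_models)
    return res.x

# usage:
# weights = fit_nonneg_mae_blend(oof_preds, y)
# test_blend = test_preds @ weights
</code></pre>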
LANL Earthquake Prediction
8th solution. XGBoost and Dataset balancing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks to the organizers for such an interesting competition.</p> <p>And thanks to the authors of these very useful kernels: <a href="https://www.kaggle.com/artgor/earthquakes-fe-more-features-and-samples">https://www.kaggle.com/artgor/earthquakes-fe-more-features-and-samples</a> <a href="https://www.kaggle.com/gpreda/lanl-earthquake-eda-and-prediction">https://www.kaggle.com/gpreda/lanl-earthquake-eda-and-prediction</a> <a href="https://www.kaggle.com/allunia/shaking-earth">https://www.kaggle.com/allunia/shaking-earth</a> <a href="https://www.kaggle.com/vettejeep/masters-final-project-model-lb-1-392">https://www.kaggle.com/vettejeep/masters-final-project-model-lb-1-392</a></p> <p>I tried different models: "time series models" (WaveNet, LSTM, ResNet-1d, FCN-1d) and "feature models" (XGBRegressor, DNN, SVR). The XGBoost model was the best. These are its hyperparameters: <code>model = xgb.XGBRegressor(booster='dart', tree_method='hist', n_estimators=100000, learning_rate=0.01, max_depth=3, subsample=0.9, colsample_bytree=0.5, reg_lambda=1, gamma=1)</code></p> <p>I also noticed that the original training data set is unbalanced: there is little data with a time to failure of more than 8 seconds. Therefore, when creating my own dataset to train the model, I used data from the "long" quakes 2, 7 and 14 more than the others, and used the 4th quake for validation. This improved the prediction for longer times to failure. </p> <p>Thank you too, congratulations! I think you have the most robust model of all the participants, because you are at the top of both the public and private leaderboards. ))</p>
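<p>A minimal sketch of the XGBoost model with the hyperparameters quoted above plus the quake-based balancing; the table layout, the oversampling factor, and the early-stopping call are assumptions:</p> <pre><code>import pandas as pd
import xgboost as xgb

# train: hypothetical per-segment table with feature columns, a 'ttf' target and 'quake_id'
LONG_QUAKES = [2, 7, 14]   # the "long" quakes used more than the others
VAL_QUAKE = 4              # held out for validation
OVERSAMPLE = 2             # assumed repetition factor; the post does not state one

def build_sets(train, feature_cols):
    val = train[train["quake_id"] == VAL_QUAKE]
    trn = train[train["quake_id"] != VAL_QUAKE]
    extra = trn[trn["quake_id"].isin(LONG_QUAKES)]
    trn = pd.concat([trn] + [extra] * (OVERSAMPLE - 1), ignore_index=True)
    return trn[feature_cols], trn["ttf"], val[feature_cols], val["ttf"]

model = xgb.XGBRegressor(booster="dart", tree_method="hist", n_estimators=100000,
                         learning_rate=0.01, max_depth=3, subsample=0.9,
                         colsample_bytree=0.5, reg_lambda=1, gamma=1)

# usage (early stopping keeps n_estimators=100000 tractable; where the argument
# lives depends on the xgboost version):
# X_trn, y_trn, X_val, y_val = build_sets(train, feature_cols)
# model.fit(X_trn, y_trn, eval_set=[(X_val, y_val)], early_stopping_rounds=200, verbose=False)
</code></pre>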
LANL Earthquake Prediction
9th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>After discovery of possible (at that point) p4677 related leak, I decided to drop from competition. Heading into last week of competition I had clear idea how I would exploit it. This post <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/94086">Will this decide who wins?</a> and the fact I had nothing to lose (dropped to ~1K public place) if assumption about test set distribution was wrong I decided to implement the idea.</p> <p>I wanted to use Leave K-EQ out type of CV scheme, but thanks to <a href="/cpmpml">@cpmpml</a> we know that difference between mean of validation and train could lead to under/overfit, so if mean of all EQ is the same, I think, it might work.</p> <p>I transformed target to normalized time to failure multiplied by twice the estimated mean of private set. I had 2 values (2 submissions): 6.3 and 5.9. In hindsight 5.9 was way off, but my estimation process was quiet crude. Late submission with higher value have slightly better score.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/544535/13414/target.png" alt="target"></p> <p>I chose LGBM, as I had been using it prior to leak discovery. Features were peaked from public kernel <a href="https://www.kaggle.com/artgor/even-more-features">"Even more features"</a> by <a href="/artgor">@artgor</a>, according to past models feature importance and <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/93148">"My Top 30 Features"</a> by <a href="/scirpus">@scirpus</a> <code> num_peaks_10, num_crossing_0, percentile_roll_std_5_window_100, abs_percentile_80, fftr_percentile_roll_std_80_window_10000 </code> Folds were generated using this code: <code>for val_t in itertools.combinations(range(15), 2):</code> where <code>val_t</code> is two EQ indexes for validation. Only full EQ were used. All these were enough for 9th place.</p> <p>More interesting for me were ideas that didn't work. One of them was effort to predict not ttf, but a pair of period of EQ and normalized time to failure. I did try siamese network based on CNN1D with similarity metric: <code>np.exp(-np.abs(right_period - left_period)/c1) * np.exp(-np.abs(right_norm_ttf - left_norm_ttf)/c2)</code> constants <code>c1</code> and <code>c2</code> were to limit points that are close enough (hyperparameters). I did not fully explore this solution, but it was fun to work on. I did try other approaches but I strongly believe that period of EQ is not predictable.</p>
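<p>A minimal sketch of the leave-2-earthquakes-out fold generation built around the <code>itertools.combinations(range(15), 2)</code> line quoted above, assuming a hypothetical per-segment <code>eq_idx</code> array giving the index of the full earthquake each segment belongs to:</p> <pre><code>import itertools
import numpy as np

def leave_two_eq_out_folds(eq_idx, n_eq=15):
    """Yield (train_indices, val_indices) for every pair of held-out earthquakes."""
    eq_idx = np.asarray(eq_idx)
    for val_t in itertools.combinations(range(n_eq), 2):  # two EQs held out per fold
        val_mask = np.isin(eq_idx, val_t)
        yield np.where(~val_mask)[0], np.where(val_mask)[0]

# usage:
# for trn_ids, val_ids in leave_two_eq_out_folds(eq_idx):
#     model.fit(X[trn_ids], y[trn_ids])
#     evaluate(model, X[val_ids], y[val_ids])
</code></pre>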
LANL Earthquake Prediction
Congrats, and my 5 min lottery tickets (81th place/silver medal)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, congrats to all the winners, especially Zoo team and my previous great teammate Danijel! All of you have done an amazing job!</p> <p>This competition is such an unstable competition that I previously didn't plan to spend any time on it. About 2 hour before the deadline, I got a bit of free time and decided to buy 2 lottery tickets for this competition and spent 5 mins to make 2 submissions. </p> <p>Basically, what I did is just: 1) take the old public LB1.456 benchmark solution (shared 2 months ago in the kernel's section <a href="https://www.kaggle.com/gpreda/lanl-earthquake-eda-and-prediction">https://www.kaggle.com/gpreda/lanl-earthquake-eda-and-prediction</a>, credits to Gabriel); 2) multiply the prediction by 1.08 and 0.96 respectively, and made these 2 submissions. That's it. </p> <p>As a result, the 1.08 adjusted submission gave private LB score 2.45 and 81th place.</p> <p>It seems that the scale/magnitude of prediction in the private LB part (compared with train data and public LB data) might probably be the largest uncertainty in this competition (in terms of public/private LB ranking), and applying a scale-up and scale-down factors adjustment might increase the chance to finish in the very top of the LB (though also increase the risk of finishing in the very bottom of the LB when factor adjustment is too aggressive)</p> <p>Btw, I also found that if we simply multiply 1.1 with the PublicLB 1.50 catboost benchmark solution (shared one month ago in kernel <a href="https://www.kaggle.com/hsinwenchang/mfcc-randomforestregressor-catboostregressor/output">https://www.kaggle.com/hsinwenchang/mfcc-randomforestregressor-catboostregressor/output</a>), it will score 2.336 on the private LB (11th place) and gave a gold medal. Without any adjustment, the benchmark solution gave 2.45 private LB score.</p> <p>I guess that most of top 15 competitors might possibly win (or close to) top 1 place prize with some simple factor adjustment.</p> <p>Finally, I am in particular very interested in the top 1 solution, who seems to have a good winning margin on the private LB compared with all other teams. Good job!</p> <p>Based on previous forum sharing, I just thought that it is more likely that the private LB might have higher(rather than lower) mean than public LB, and I decided to apply a bit more aggressive adjustment when scaling up and less aggressive adjustment when scaling down. The specific values 1.08 and 0.96 are just two random numbers from my intuition (though did some simple math calculation to make sure the adjustment would not be out of the range of "making sense", and also double check it by looking at the public LB score). For example, an adjustment factor of 8.0 would definitely blow up the score and won't make any sense (which could also be verified by looking at the public LB).</p> <p>Also, I made the 1.08 adjustment submission first, which gave me 1.68 public LB score, which I think it is in the range of "makes sense". 
If I found that submission to have too bad public LB score (e.g., 2.0+ public LB score), then for my 2nd submission I probably would apply a less aggressive 1.x+ adjustment (e.g., 1.02) rather than 0.96 adjustment, since I do think that there is a higher chance that private LB would have a higher mean (based on forum discussions). But given that my first submission looks reasonable, I made the 2nd scale-down submission with factor 0.96 as planned.</p> <p>It is based on several simple thoughts:</p> <p>[1] The public LB score 1.45 is "reasonably ok". [2] The kernel solution was generated 2 months ago, at that time it focused more on the EDA part and didn't try every efforts to climb the public LB, and thus less prone to overfit to the public LB (compared with most recent kernels with slightly better public LB score) [3] I only have a couple of mins, and thus I just quickly take one benchmark solution that I "like" most at the first glance, rather than did a comprehensive search and comparison among all available "good benchmark". </p> <p>Btw, the same factor adjustment trick probably would work for most, if not all, "good public benchmarks". And the one I selected performs reasonably well but definitely not the best/optimal one. As I also mentioned in the main post, if I selected the public LB 1.50 kernel solution shared 1 month ago and applying the same adjustment, it will gave a gold medal.</p> <p>In short, I just did sth simple that I think might have some nontrivial chance to finish very well on the private LB (by scaling up and scaling down), and the rest is luck. That's why I called my two submissions as "lottery tickets".</p>
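<p>The factor adjustment described above is essentially a one-liner on the benchmark submission file; a minimal sketch with a placeholder file name:</p> <pre><code>import pandas as pd

# 'benchmark_submission.csv' is a placeholder for the public-kernel output;
# the LANL submission format has a 'time_to_failure' column
sub = pd.read_csv("benchmark_submission.csv")
for factor in (1.08, 0.96):
    scaled = sub.copy()
    scaled["time_to_failure"] = scaled["time_to_failure"] * factor
    scaled.to_csv(f"submission_x{factor}.csv", index=False)
</code></pre>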
LANL Earthquake Prediction
My no-test-leak 23rd solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Given your late join, it is of course impossible for you to do everything, but I see you have much joy in this competition. Congrats on your impressive survival and leading the public LB.</p> <p>Thanks, and let me share the feelings of being dropped due to leak. Indeed, other transformations also work well, such as log(y+5). Gamma or huber regression is other types of regression that assumes the variance increases w.r.t target. They all work better than the nightmare MAE loss or MSE.</p> <p>Thanks Amjad <a href="/amjad85">@amjad85</a> and congrats your first gold medal. </p> <p>nice!</p> <p>Hello guys,</p> <p>After a day both agonising my lost solo gold medal, as well as enjoy my luck to survive the shake, I am now be able to do this mini write-up. </p> <p>Although I admitted I exploited the advantage of test leak to survive, my 23rd position DID NOT use any kind of test leak information. What I mean is that as the leak test estimates are known, it just made me less suspect my final submission with a very bad public LB (1.350) score as well as bad local MAE score, and select it as the "risky" submission. That submission does not use any kind of leak information.</p> <p><strong>Feature Engineering</strong> + The top feature is the 1st coefficient of numpy.polyfit of order 3 on the columns of STFT spectrogram. + The 2nd top feature is just the magnitude of a specific band of FFT frequency, after some smoothing techniques on the FFT spectrum. + Other weaker features are some welch coefficents, roll std, STFT contrast level, C3 unlinearity....</p> <p><strong>Models</strong> I used LGB, XGB, CatBoost, and RF. All 4 of them score almost identical in my CV as well as LB (around 1.31 to 1.33). A simple blend of them gave me 1.290 on public LB. Hyperparameters were tuned by random search and later by Gaussian optimization. LGB with 40 leaves, 20 min_data_in_leave, depth 7, subsample 0.5. XGB with depth 7, subsample 0.5. </p> <p><strong>CV</strong> This is the part that made me disappointed at the end. I was disappointed as I spent a lot of time building this trustworthy CV framework, but in the end to get a good score, one would not need it at all. However, this CV framework can serve me as a legacy for future competitions. - First, I bet so many people may wonder why using sqrt(y) gives better result than raw y. That's because the varied variance of predictions. Higher ttf samples are more unpredictable. So we need to focus more on low ttf samples, because they are achievable. If we use raw y, CV will overfit on high ttf samples, and underfit (not converged) in low ttf samples. </p> <ul> <li><p>After some careful CV iterations (which will be described next), I chose y^(0.55) to be the best transformation.</p></li> <li><p>Leave-one-quake-out (LOQO) or shuffled CV will give the same result, if the distributions of true train data and early stopping data is the same, with a SEPARATE validation data. As a result, I used 10 stratified folds based on 16 quake_id. For each fold train, I further use another CV of another 10 stratified folds of the training fold (which has 9/10 data of the whole original training set). 
So, in the end each innermost loop of training session consists of (9/10)*(9/10) samples from the whole original train set. The inner loop CV serves to get the best averaged iteration for each outer fold train.</p></li> <li><p>I train on 10 different seeds of the outer CV.</p></li> <li><p>It turns out that using shuffled CV in general gives better result than LOQO, because each sample is validated by 10 different subsets of seeds. In LOQO, each data sample is validated by just 1 set of seed, that is the combination of all the quakes other than the one containing that sample.</p></li> </ul> <p><strong>Stack</strong></p> <ul> <li><p>Optimizing MSE: Using a simple ridge regression sample on all OOF predictions of 4 models (each with 10 seeds, as said), I got my best submission which score 1.350 public LB and 2.409 private (23rd position). Please note that, this is my best CV only in terms of MSE. The stacked MAE is much worse than any of the 1st level model. This is a gamble that I made, given that the private set has significantly higher ttf samples. Without that knowledge, I would have not selected this submission. This is my "risky" submission, and turned out to be the saviour.</p></li> <li><p>Optimizing MAE: Ridge regression does not allow us to directly optimize MAE. So as usual, transform y to y^(0.55), and do MSE. By this, my stacked MAE is much better, which gives me 1.29x on public LB. This is my "safe" submission, and turned out to be the disaster.</p></li> </ul> <p><strong>What I regret</strong> I did the exact estimate of private LB based on the image of the p4677 paper. Then, I use the training weights obtained by getting the binned normalised histogram of private set divide the binned normalised histogram of the training set. Then I use just 1 seed split of 1 simple XGB model. That model gives 1.380 on public LB, and 2.375 on private (which could earn me a gold). If I blend more models with these weights, and finally stacking, I will end of in the top 3. I decided not to use this leak at the end, and I regret it. However, it's still lucky to me that my non-leak submission secured my 23rd position.</p> <p>Thanks for reading.</p> <p>My non leaked model has the same mean with the leaked one. I just did not want to gamble too much. </p>
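<p>A minimal sketch of the y^0.55 target transform described above, wrapped around LightGBM with assumed parameters and hypothetical <code>X</code>/<code>y</code> arrays; the point is simply to train against the transformed target and invert the transform at prediction time:</p> <pre><code>import numpy as np
import lightgbm as lgb

POWER = 0.55  # the transform the author settled on (sqrt(y) also works)

def fit_on_transformed(X, y, params=None, num_boost_round=2000):
    # assumed parameters; train against y**0.55 so high-ttf samples stop dominating the loss
    params = params or {"objective": "regression", "verbosity": -1}
    return lgb.train(params, lgb.Dataset(X, label=np.power(y, POWER)),
                     num_boost_round=num_boost_round)

def predict_ttf(model, X):
    # invert the transform, guarding against small negative outputs
    return np.power(np.clip(model.predict(X), 0, None), 1.0 / POWER)
</code></pre>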
LANL Earthquake Prediction
My one trick for feature selection/data augmentation (183rd place)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>With such a small amount of data (4194 segments), it is way too easy to overfit the cross validation and public LB. What we did to try to compensate for the small data size was to keep the segment length at 150,000, but train the models on 140,000 data points instead. For each segment a starting offset of 0, 2000, 4000, 6000, 8000, and 10000 was used to produce 6-times the training data to feed into the models, which also removes any spurious features that might happen to have some importance by chance. On the test data the 6 outputs are then averaged. </p> <p>This setup only marginally improves CV score by about 0.003, but it makes the feature importance for a whole lot of features zero or close to zero. Our best submission was actually a single lgbm model trained with this scheme, although we would have never picked that over other stacked or averaged results.</p>
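<p>A minimal sketch of the offset-window augmentation and test-time averaging described above; <code>extract_features</code> and <code>model</code> are placeholders for whatever feature function and regressor are used:</p> <pre><code>import numpy as np

OFFSETS = [0, 2000, 4000, 6000, 8000, 10000]
WINDOW = 140_000  # features are computed on 140k points inside each 150k segment

def augmented_rows(segment, extract_features):
    """Yield one feature vector per offset for a 150k-sample training segment."""
    for off in OFFSETS:
        yield extract_features(segment[off:off + WINDOW])

def predict_segment(model, segment, extract_features):
    """Average the model over the same six offsets at test time."""
    rows = np.vstack([extract_features(segment[off:off + WINDOW]) for off in OFFSETS])
    return model.predict(rows).mean()
</code></pre>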
LANL Earthquake Prediction
One Feature, No ML, Gold Medal Range
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I have created a <a href="https://www.kaggle.com/trentb/one-feature-no-ml-gold-medal-range">kernel</a> that demonstrates the ability to achieve a high score (private leaderboard: 2.33037; 9th place) in the LANL Earthquake Prediction competition with only <strong>one</strong> well-crafted feature, no machine learning, and peak TTF information from the test set data leak.</p> <p>Yes, this was submitted out of curiosity after the competition when I fully realized how important scaling to the test mean was.</p> <p>During the competition I submitted an unscaled-to-test-mean submission of the one feature, but I then abandoned the idea.</p>
LANL Earthquake Prediction
Our 35th place model ... (didn't submit)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>The submission we didn't select and could have been 35th place.</h2> <p>While we are waiting for top solutions, I'd like to share with you our best model. Congratulations to <strong>the real 35th place</strong> ブギーな胸騒ぎ and everybody who survived the shakeup, and obviously congrats to The Zoo, CPMP - Bluetrain - Antoine, Jun Koda, CarlosPK, Ivan Batalov ...</p> <h1>🥇 👍</h1> <h3><a href="https://www.kaggle.com/jesucristo/singlexgb-2-42917-35th-late">SingleXGB - 2.42917 (35th late)</a></h3> <hr> <p>Please also check our approach:</p> <p><a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/94351#latest-542727">The last minute analysis that kept us in silver medal contention</a> <a href="https://www.kaggle.com/robikscube/lanl-simulating-the-test-set">LANL Simulating the Test Set</a></p> <p>by my teammate <a href="/robikscube">@robikscube</a> </p> <p><a href="https://media1.giphy.com/media/NshMpAYHfL4bK/giphy.gif">https://media1.giphy.com/media/NshMpAYHfL4bK/giphy.gif</a></p>
LANL Earthquake Prediction
Short summary for our approach (public 5th -> private 212th)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for your advice! I'll try and do late submissions.</p> <p>I'm sorry for the lack of explanations.</p> <p>I mean we created the following 6 model:</p> <ul> <li>lgbm training rounds = 10000</li> <li>lgbm training rounds = 12000</li> <li>lgbm training rounds = 14000</li> <li>lgbm training rounds = 16000</li> <li>lgbm training rounds = 18000</li> <li>lgbm training rounds = 20000</li> </ul> <p>And finally we averaged predictions of 6 models. </p> <p>Single "lgbm training rounds = 10000" scored 1.263 in public LB, and "lgbm training rounds = 20000" scored 1.262. And averaged submission gave us 1.259.</p> <p>Congrats to all winners. Of course we are a little disappointed at the result, but we would like to share our approach. We believe our experience can be lessons for us, and also for some other people.</p> <h1>Summary</h1> <p>Our best public LB score 1.259 is provided by ensemble of 6 LightGBM models.</p> <ul> <li>Use features of “<a href="https://www.kaggle.com/vettejeep/masters-final-project-model-lb-1-392">Masters Final Project</a>” (Its size is over 800)</li> <li>Reduced the size up to 449 by: <ul><li>Kolmogorov–Smirnov test</li> <li>scipy.stats.pearsonr</li> <li>adversarial validation</li></ul></li> <li>Train LightGBM models with: <ul><li>gamma regression</li> <li>rounds different from 10000 to 20000 (separated by 2000)</li></ul></li> </ul> <h1>CV Strategy</h1> <ul> <li>KFold</li> <li>8 fold</li> <li>shuffle=True</li> </ul> <p>When shuffle=False, cv score in each fold fluctuates a lot <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/89366">as olivier says</a>. We felt it’s not appropriate for good cv, so I decided to use shuffle=True.</p> <h1>Models</h1> <p>We tried {LightGBM, XGBoost, Catboost, MLP, RNN}. And LightGBM is the best model for us, the others couldn’t be better.</p> <h2>Hyperparams</h2> <p>Gamma regression is suitable for this competition. By just changing objective from <code>regression</code> to <code>gamma</code>, we can get great gain in public LB score. </p> <h1>Ensemble</h1> <p>We utilize “rounds averaging” for ensemble.</p> <p>Since we use the augmented features, early stopping doesn’t work properly. We can’t see what is the best round for LightGBM, so we take the average of 6 predictions generated from different rounds models (from 10000 to 20000 separately).</p> <h1>Submission Strategy</h1> <p>We prepared the following 2 submission:</p> <ul> <li>Almost the same model described above <ul><li>Only one difference is the number of rounds averaging. 
We used 11 models (from 10000 to 20000)</li> <li>public: 1.260, private 2.49606</li></ul></li> <li>Feature-dropped version <ul><li>We removed all features named ‘mean’ so that we could avoid overfitting</li> <li>public: 1.273, private 2.49699</li></ul></li> </ul> <h1>What didn’t work for us</h1> <ul> <li>Converting targets to sqrt(targets) so that the targets follow a normal distribution</li> <li>Removing some train data by adversarial validation</li> <li>Using meta features (prediction values as features)</li> <li>Using dart as the LightGBM boosting type</li> <li>Stacking</li> <li>Seed averaging</li> </ul> <p>Anyway, I would like to thank every participant, and especially my teammates. We would like to learn more from the winners' solutions!</p>
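<p>To make the "gamma regression + rounds averaging" recipe concrete, here is a minimal sketch with LightGBM. <code>X_train</code>, <code>y_train</code> and <code>X_test</code> are placeholders for the engineered feature matrices and the time-to-failure target, and the remaining hyperparameters are illustrative, not the exact ones used in this solution.</p> <pre><code>import lightgbm as lgb
import numpy as np

params = {
    "objective": "gamma",   # gamma regression: the target (time to failure) is strictly positive
    "learning_rate": 0.01,
    "num_leaves": 31,
    "metric": "mae",
}

train_set = lgb.Dataset(X_train, label=y_train)

# "Rounds averaging": train models with different fixed numbers of boosting rounds
# and average their test predictions instead of relying on early stopping.
preds = []
for num_rounds in range(10000, 20001, 2000):
    booster = lgb.train(params, train_set, num_boost_round=num_rounds)
    preds.append(booster.predict(X_test))

final_pred = np.mean(preds, axis=0)
</code></pre>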
LANL Earthquake Prediction
Some elements of 7th place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Kaggle for this interesting competition!</p> <p>Here are a few points from our solution, at least my part (I'll let my teammates describe their interesting ideas):</p> <ul> <li>1-EQ-out validation scheme</li> <li>Data augmentation with 30k chunk size</li> <li>Base models are simple lgb/xgb models with a fixed number of trees, huber or gamma objective, and sample weights</li> <li>Calculated around 200 features based mainly on STFT (see image attached); there was a correlation between the energy in some frequency bands and the ttf. I created those features based on a Hann window with sizes 1000 and 5000 for different frequency bands (44 kHz - 60 kHz, 60 kHz - 136 kHz, 140 kHz - 212 kHz, 216 kHz - 356 kHz)</li> <li>Used different indicators on those bands: mean / std / quantiles, plus some simple features on the raw signal like the number of peaks above a threshold or quantiles</li> <li>One way to improve our CV score was to tweak the original ttf to lower it for very long EQs. I basically did a parallel shift of the ttf between the start of the long EQ and the mini-EQ.</li> </ul> <p>This solution was in the top 15 on the public leaderboard. Then the difficult part began when we realized that the private distribution might be very different from the public LB and also from the train data. So we developed different measures based on different possible test sets. We realized that the distribution of EQ types (short or long) and the distribution of ttf had a significant impact on our CV score. We optimized 2 of those measures, with the hypothesis that the distribution could be the one in the published paper. The public LB scores of those 2 subs were 1.39 and 1.29. The best on the private LB was the first one ;) I'm glad we were able to survive the shake-up, this was really not an easy task. </p> <p>Congrats to all of you who shared great content in the Kernels and in the Forum. Congratulations to the winning team! </p> <p>And a special thanks to my teammates @cpmpml and @bluetrain, who brought so many very interesting ideas that I hope they will share with you soon! Great team work!</p>
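<p>A rough sketch of the STFT band-energy features described above is given below. The sampling rate <code>fs</code>, the aggregation choices and the feature names are assumptions for illustration; only the Hann window sizes (1000 / 5000 samples) and the frequency bands come from the write-up.</p> <pre><code>import numpy as np
from scipy.signal import stft

BANDS = [(44e3, 60e3), (60e3, 136e3), (140e3, 212e3), (216e3, 356e3)]

def band_energy_features(signal, fs=4_000_000, nperseg=5000):
    # scipy's default STFT window is 'hann'
    f, t, Z = stft(signal, fs=fs, nperseg=nperseg)
    power = np.abs(Z) ** 2
    feats = {}
    for lo, hi in BANDS:
        mask = (f &gt;= lo) &amp; (f &lt; hi)
        band = power[mask].sum(axis=0)  # energy per time step inside the band
        name = f"band_{int(lo / 1e3)}_{int(hi / 1e3)}khz"
        feats[f"{name}_mean"] = band.mean()
        feats[f"{name}_std"] = band.std()
        feats[f"{name}_q95"] = np.quantile(band, 0.95)
    return feats
</code></pre>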
LANL Earthquake Prediction
That One Sure Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Everyone is looking forward to the solution of ABC. I guess only @cpmpml knew what he was doing.</p> <p>And congratulations to you too. Your mean and median thing was right, I guess; I am curious to know how you came up with that.</p> <p>Yes, and of course you. Looking forward to your solution.</p>
LANL Earthquake Prediction
Unconventional gold (16th place solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>To begin with, I want to congratulate all competitors regardless of leaderboard ranking for what has surely been a lot of work these months. In many cases it may feel unrewarded. Don't despair, just make sure to learn as much as possible, including the final wrap-ups, and it will have been worth it. </p> <p>So, here are my hints, hoping they will be useful for someone:</p> <h3>GENERAL IDEAS:</h3> <ul> <li><p>Assessing the amount of noise in a competition is the key to generalization. (And it is never easy)</p></li> <li><p>You can win without stacking, and even without a too complex model, as long as you frame the problem correctly. </p></li> <li><p>Don't be trapped by the Public Leaderboard race, just treat it as another fold of your validation or -sometimes- ignore it altogether</p></li> </ul> <h3>MY -CONDENSED- PATH TO THIS GOLD:</h3> <ul> <li><p>First month: lost time trying to make an RNN work in a thousand creative ways. After lots of work I had to bite the bullet: RNN was not the way to go. <strong>Difficult decision</strong>: Start from scratch.</p></li> <li><p>Second month: The fight to find a proper validation scheme, the realization of just how overfittable the dataset was. Lots of feature engineering, most of it the kind already seen in forums, the only "original" ones being some computationally expensive Matrix Profile stats that turned out not to be too useful. <strong>Difficult decision</strong>: Use random forest. (Less tuning, less sparsity of feature importance than boosting; it was also important to do feature selection towards the less overfitting features). </p></li> <li><p>Last three days: As <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/90664#latest-535844">this post</a> disclosed, there was a potential leak that showed the train-test relationship. Not a leak that was too easy to leverage, and risky to use though. Seeing how important it has been in the end, some kind of systematic processing of the image would have been extremely effective, but I didn't do that. Instead I simply gained a couple of insights: </p></li> </ul> <p>First insight: outlier removal. Outliers are not just out-of-range values but, more generally, train data that is unusual vs. what is to be expected. I realized that earthquakes 5 and 6 were actually a "synthetic" split of what in the experiment was considered a single earthquake, i.e. the stress threshold was different. <strong>Difficult decision</strong>: to remove those two earthquakes' data from an already small dataset.</p> <p>Second insight: I had already observed that the bimodal distribution of predictions was very different between train and test, showing that shorter earthquakes in test were way longer than shorter earthquakes in train. That was an ugly warning, and fitted quite well with what at a glance could be seen in the experiment. I visually estimated the proportion and lowered the weight of "small" earthquakes in the training set by 1/5, no fancier adjustment than that; the noise was too big for more fine-grained tuning anyway.
<strong>Last uncomfortable decision</strong>: select a submission that placed me around 2300th on the public leaderboard because it made sense.</p> <h3>ONE LAST WARNING</h3> <p>So, that was it. Hope it was of some interest. One last warning: don't take any of these decisions too rigidly (like using Random Forest, for example; I never choose an RF without a reason to do it). <strong>In the end, it is all about generalization and every competition is different</strong>. </p> <p>As I said in the beginning, just make sure to keep on learning and to have fun :-)</p> <p>Sure, it's a very interesting tool; here is all you need to learn about it: <a href="https://www.cs.ucr.edu/~eamonn/MatrixProfile.html">https://www.cs.ucr.edu/~eamonn/MatrixProfile.html</a></p>
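<p>A minimal sketch of the "lower the weight of small earthquakes" idea with a random forest is shown below. The 0.2 factor is one reading of "by 1/5", and <code>is_short_eq</code> is a hypothetical boolean flag marking segments that belong to the shorter earthquake cycles; the hyperparameters are illustrative only.</p> <pre><code>import numpy as np
from sklearn.ensemble import RandomForestRegressor

# X, y: segment features and time-to-failure targets (placeholders)
weights = np.where(is_short_eq, 0.2, 1.0)  # down-weight segments of "small" earthquakes

model = RandomForestRegressor(n_estimators=500, min_samples_leaf=30,
                              n_jobs=-1, random_state=0)
model.fit(X, y, sample_weight=weights)
</code></pre>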
LANL Earthquake Prediction
Virtual LB 130th: Mel spectrogram feature + class balance simple solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/daisukelab/lanl-solution-by-mel-spectrogram-dataset-2">https://www.kaggle.com/daisukelab/lanl-solution-by-mel-spectrogram-dataset-2</a> Sharing an approach in which people might be interested.</p> <h3>Feature generation</h3> <ul> <li>First converted the source data into 62,900 mel-spectrograms by fetching 150,000-sample segments with a sliding window. <ul><li>fs=6MHz, step=40000/34, n_fft=40000 =&gt; mel-spectrogram data shape=[62900, 128, 128]</li></ul></li> <li>Converted the mel-spectrogram data into 3 aggregated features: mean, std, and a trend-like feature. =&gt; data shape=[62900, 384]</li> </ul> <h3>Balancing dataset</h3> <ul> <li>Apply the stratified class balancing described in my kernel <a href="https://www.kaggle.com/daisukelab/balancing-dataset-while-earthquake-is-happening">https://www.kaggle.com/daisukelab/balancing-dataset-while-earthquake-is-happening</a></li> <li>By virtually handling the dataset as a labeled set.</li> </ul> <h3>Model</h3> <ul> <li>Simple LGBM.</li> </ul> <p>Unfortunately I have deleted the mel-spectrogram dataset; I can post it upon request later.</p>
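<p>A minimal sketch of the feature generation step, assuming librosa and the settings quoted above (fs=6MHz, n_fft=40000, hop=40000/34, 128 mel bands). The "trend-like" aggregation here (last frame minus first frame) is a guess; only mean and std are explicit in the write-up.</p> <pre><code>import numpy as np
import librosa

def mel_features(segment, sr=6_000_000, n_fft=40000, hop_length=40000 // 34, n_mels=128):
    # 150,000-sample segment -> mel spectrogram of shape [128 mels, ~128 frames]
    mel = librosa.feature.melspectrogram(y=segment.astype(np.float32), sr=sr,
                                         n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel)
    mean = mel_db.mean(axis=1)            # 128 values
    std = mel_db.std(axis=1)              # 128 values
    trend = mel_db[:, -1] - mel_db[:, 0]  # crude trend-like feature, 128 values
    return np.concatenate([mean, std, trend])  # 384 features per segment
</code></pre>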
LANL Earthquake Prediction
fast.ai, a viable time series solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LANL Earthquake Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used a pretty simple implementation of fast.ai's tabular library and got a decent result (74th). I didn't think it would perform very well after reading about deep learning's usually poor performance for this type of problem.</p> <p><a href="https://www.kaggle.com/isaacshannon/isaac-fast-ai-evo">https://www.kaggle.com/isaacshannon/isaac-fast-ai-evo</a></p>
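<p>For readers unfamiliar with fast.ai's tabular API, here is a minimal sketch of this kind of setup (fastai v2 style, not the exact code from the linked kernel; <code>df</code>, <code>feat_cols</code>, <code>valid_idx</code> and <code>test_df</code> are placeholders).</p> <pre><code>from fastai.tabular.all import *

dls = TabularDataLoaders.from_df(
    df, y_names="time_to_failure", y_block=RegressionBlock(),
    cont_names=feat_cols, procs=[Normalize],
    valid_idx=valid_idx, bs=256,
)
learn = tabular_learner(dls, layers=[200, 100], y_range=(0, 16), metrics=mae)
learn.fit_one_cycle(10, 1e-2)

preds, _ = learn.get_preds(dl=dls.test_dl(test_df))
</code></pre>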
LLM - Detect AI Generated Text
113th solution: Reranking with Two-Stage Prediction
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the competition host for organizing such an interesting competition!</p> <p>Our solution uses an ensemble approach, combining the TF-IDF based prediction with the two-stage prediction. (<a href="https://www.kaggle.com/code/reiendo73/daigt-ensemble-two-stage-prediction" target="_blank">notebook</a>)</p> <p>Two-stage prediction (Inspired by: <a href="https://www.kaggle.com/code/hsinhungwu/two-stages-prompt-prediction-custom-models" target="_blank">Two-Stages: Prompt Prediction + Custom Models</a>)</p> <ul> <li>Stage 1: Prompt prediction</li> <li>Stage 2: Predicting probabilities by training models for each prompt</li> </ul> <p>In stage 2, only the data for a specific prompt is used for training, so models are created specifically for that prompt. The distribution of predicted probabilities is different for each model and each prompt.</p> <p>We therefore used ranking of probabilities per prompt instead of raw probability values.</p> <p>The TF-IDF based prediction results for the overall data are reranked based on the ranking per prompt in the two-stage prediction results, and the average values before and after the reranking are calculated.</p> <p>An example is given below: the <code>generated</code> column contains the probability and the ranking per prompt.</p> <table> <thead> <tr> <th>id</th> <th>prompt_id</th> <th>generated (TF-IDFbased)</th> <th>generated (two-stage)</th> <th>generated (combining)</th> </tr> </thead> <tbody> <tr> <td>a</td> <td>1</td> <td>0.7, <strong>1</strong></td> <td>0.5, <strong>1</strong> → 0.7</td> <td>0.7, <strong>1</strong></td> </tr> <tr> <td>b</td> <td>1</td> <td>0.6, <strong>2</strong></td> <td>0.3, <strong>3</strong> → 0.2</td> <td>0.4, <strong>2</strong></td> </tr> <tr> <td>c</td> <td>1</td> <td>0.2, <strong>3</strong></td> <td>0.4, <strong>2</strong> → 0.6</td> <td>0.4, <strong>2</strong></td> </tr> <tr> <td>d</td> <td>2</td> <td>0.5, <strong>1</strong></td> <td>0.9, <strong>1</strong> → 0.5</td> <td>0.5, <strong>1</strong></td> </tr> <tr> <td>e</td> <td>2</td> <td>0.3, <strong>2</strong></td> <td>0.1, <strong>2</strong> → 0.3</td> <td>0.3, <strong>2</strong></td> </tr> </tbody> </table> <p>This method excludes the model specific differences in probability distributions in the two-stage prediction.</p> <p>Compared to using only the TF-IDF based prediction, the private score improved from 0.907 to 0.909.</p> <p>I hope this helps with your future ideas.</p>
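<p>A small sketch of the reranking step described above (column names such as <code>p_tfidf</code> and <code>p_two_stage</code> are assumptions for illustration): within each prompt, the two-stage prediction is replaced by the TF-IDF probability that has the same per-prompt rank, and the result is averaged with the original TF-IDF probability, reproducing the table above.</p> <pre><code>import pandas as pd

def rerank_blend(df):
    df = df.copy()
    # Integer rank of the two-stage prediction within each prompt (1 = most likely generated).
    df["rank_two_stage"] = (
        df.groupby("prompt_id")["p_two_stage"]
          .rank(ascending=False, method="first")
          .astype(int)
    )

    def remap(group):
        # TF-IDF probabilities of this prompt, sorted from highest to lowest.
        sorted_probs = group["p_tfidf"].sort_values(ascending=False).to_numpy()
        # Take the TF-IDF value that sits at the same per-prompt rank as the two-stage prediction.
        group["p_reranked"] = sorted_probs[group["rank_two_stage"].to_numpy() - 1]
        return group

    df = df.groupby("prompt_id", group_keys=False).apply(remap)
    # Final score: average of the original and the reranked probabilities.
    df["generated"] = (df["p_tfidf"] + df["p_reranked"]) / 2
    return df
</code></pre>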
LLM - Detect AI Generated Text
12th place solution: DeBERTa + TF-IDF
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I've seen a lot of people discussing this method and achieving improvements, but for me, I've tried many of the methods discussed and used some of the state-of-the-art methods in the field, but both TFIDF and Deberta, neither have shown any improvement. For me, the time for this competition was a bit short (since I joined the competition late). If I encounter a similar situation in the next competition, I will try to conduct more in-depth research.</p> <p>LB 0.89585<br> PB 0.89919<br> deberta-v3-large</p> <p>Thanks to the organizers for hosting such an interesting competition.<br> Final Solution used an ensemble of DeBERTa and TF-IDF.</p> <h2>Key point</h2> <ol> <li>TF-IDF uses test dataset</li> <li>As broad of data as possible</li> </ol> <p>Let me share my thoughts on TFIDF. This method only considers the n-grams from the test set, which is the set that needs to be predicted. This means that the n-grams we obtain from the test set are definitely clean and useful features. Using these features to train on the training set is very correct and efficient. From another perspective, this is tantamount to leaking data (features) from the test set. In Kaggle competitions, data leakage can be particularly useful, so there's no reason not to employ it.</p> <h2>Dataset</h2> <p>In the early stage, data was generated for each prompt in the Persuade corpus. However, when I trained the DeBERTa model on this data, I found that it performed poorly on LB. Therefore, I gave up using this part of the dataset for training and used it as my local CV dataset. This competition made me think of Stable Diffusion - Image to Prompts, so I started considering trying to use a more extensive dataset.</p> <p>In the process of searching for datasets, I found that there were many datasets to choose from. After careful consideration, I selected the tiiuae/falcon-refinedweb dataset. The reason for choosing this dataset is mainly because it has undergone detailed filtering and performed better than other datasets in Falcon's paper.</p> <p>Because my goal is to find the most comprehensive data possible, I will consider the entire text of the first chunk file in dataset as a starting text pool. For the text in other chunk files, I will calculate the similarity(using BAAI/bge-base-en-v1.5) between the current text and all the text in the previous text pool. Only when the similarity is below the threshold (0.7) will the text be added to the text pool.</p> <p>After length filtering (&gt;768 tokens), the final dataset contains approximately 600k data.</p> <p>For each text, I randomly extract a part of the text. Then, the text of random length(128~384) at the beginning is taken as context guidance, while the remaining text is used as training data (generated=0). For the extracted context, input to different LLMs to continue writing the text based on the style of the context (generated=1).</p> <p>Based on my experience and observation, the quality of LLMs generated by all other models except mistral and llama2 is not very high, so I mainly use these two LLMs and their variations. 
</p> <p>Some LLMs generate very short sequences, and most of these short sequences are of low quality, so I added a length filter (&gt;170 tokens) afterwards.</p> <p>Finally, the LLMs generated approximately 410k samples. Therefore, the final dataset consists of 820k texts with a 1:1 ratio of positive and negative samples.</p> <p>For a given context, I have a real text and a text generated by an LLM, with the lengths of the two texts varying significantly. Therefore, during the training of DeBERTa, the text is randomly truncated to randint(128, min(origin_len, LLM_len)), which ensures that the length distribution of texts with different labels is consistent during training and prevents the model from learning the irrelevant variable of text length.</p> <h2>Model</h2> <h3>deberta</h3> <ul> <li>model id: deberta-v3-large</li> <li>train &amp; infer length: 512</li> <li>epoch: 3</li> <li>head: mean-pooling</li> <li>LLRD</li> </ul> <h3>TF-IDF</h3> <p>copied from <a href="https://www.kaggle.com/code/batprem/llm-daigt-analyse-edge-cases" target="_blank">LLM DAIGT Analyse edge cases</a></p> <p>I attempted to make some modifications, but the improvement was not significant enough for me to adopt them. <br> There must be a huge difference between the distribution of the local data and the test set. In order to avoid overfitting on local data and better utilize the test set, I decided not to make any modifications.</p> <h3>Ensemble</h3> <p>TF-IDF : deberta = 0.85 : 0.15</p> <p>The weights are determined based on the average of local and LB performance.</p> <h2>Abandoned ideas</h2> <p>The following ideas were evaluated after initial attempts and found to be possibly beyond my capabilities or to require a significant amount of computation, so they were abandoned halfway through. Perhaps there are better methods among these ideas, but I didn't have the time/GPU to try them out.</p> <ol> <li><p>Finetuning an LLM to mimic the style of Persuade</p> <p>After fine-tuning, I found that the quality of the LLM-generated text decreased a lot and it did not make the texts harder for DeBERTa to distinguish, so I abandoned that approach.</p></li> <li><p>Adversarial learning</p> <p>Let the LLM generate data, DeBERTa selects difficult-to-distinguish hard samples, the LLM continues to train on the hard samples and generate…</p> <p>After evaluating the required computation, I decisively gave up.</p></li> <li><p>Finetuning an LLM to classify</p> <p>It needs too much computing power. I sampled 82k from the 820k dataset to train Mistral-7b. Each epoch took more than 10 hours using a 3090 Ti, but the score on the LB was only 0.642. I didn't have enough resources to keep trying.</p></li> </ol> <h2>What didn't work for me</h2> <ol> <li>soft labels</li> <li>text classification based on spelling errors</li> <li>fixing spelling errors; I tried many solutions, distance-based and neural network-based, but none of them worked</li> <li>prompt engineering on deberta</li> </ol> <h2>Acknowledgements</h2> <p>Thanks to all the enthusiastic members of the Kaggle community for providing datasets.</p> <p>Thanks to <a href="https://www.kaggle.com/batprem" target="_blank">@batprem</a> for providing a high-scoring notebook. Special thanks to <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> for providing interesting insights. </p> <p><code>How long does it take to run each epoch?</code><br> Deberta large costs 4 hours per epoch on 4×RTX 4090<br> <code>FinalProb = Prob1 * 0.85 + Prob2 * 0.15</code><br> Yes</p>
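<p>The random-truncation trick used to remove length as a signal can be sketched as follows, applied to already tokenized sequences; function and variable names are illustrative, not from the original training code.</p> <pre><code>import random

def truncate_pair(human_ids, llm_ids, min_len=128):
    # One common target length per (human, LLM) pair so that sequence length
    # carries no information about the label.
    joint_max = min(len(human_ids), len(llm_ids))
    target = random.randint(min_len, max(joint_max, min_len))
    return human_ids[:target], llm_ids[:target]
</code></pre>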
LLM - Detect AI Generated Text
13th place solution - Transformers only
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you, I was going to select my transformers ensemble - I didn't know that it would perform so well. In many previous NLP competitions ensemble of transformers have worked well , can't say the same for test set - TFIDF approach which was very popular here. Also I wasn't going for efficiency and we could select 3 subs - so this had to be one of them. I used Gemini Pro and GPT 3.5/4 for generating data, spent lot of time on Mistral - I was able to generate but it didn't improve LB. </p> <p>All model training was done locally (single 4090 sometimes on vast.ai 1x 4090) - only inference was done on Kaggle</p> <p>Thank you <em>The Learning Agency Lab and Kaggle</em> for hosting another competition. </p> <p><strong>My solution</strong> <br> Score - private LB 0.938 and public 0.957 <br> My best private submission was also the selected submission (this would have been about rank 2000 on public LB)</p> <p>Selected submissions (sorted by private score) - </p> <p><a href="https://postimg.cc/0rQ424Nr" target="_blank"><img src="https://i.postimg.cc/66LxzJJV/Screen-Shot-2024-01-24-at-11-54-35-AM.png" alt="Screen-Shot-2024-01-24-at-11-54-35-AM.png"></a></p> <p><strong>Dataset</strong> <br> I used some of the publicly shared datasets. Amongst all the datasets I tried <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a>'s v4 was the magic dataset for me. Thank you Darek. I also generated some data on my own. </p> <p>Here I share some of the Prompts that I used -</p> <pre><code>System Message f Prompt </code></pre> <p>Persuade Corpus <a href="https://zenodo.org/records/8221504" target="_blank">Paper</a> has race/ethnicity / gender information - I tried to incorporate some of these in the prompts. Also ensuring that there's a good balance of all 7 prompts.</p> <p>I also removed duplicates that were in persuade corpus and provided training set. I used <code>BAAI/bge-base-en-v1.5</code> embeddings and cosine similarity to find matches. </p> <p>Overall I had a dataset of about 45K samples. </p> <p><strong>Cross validation</strong><br> Practically none, unless the dataset is fixed we can’t evaluate and compare. I was experimenting with different datasets frequently. Towards the end I did fix the dataset and started observing CV at 4th decimal place - which didn’t really work. </p> <p><strong>Modeling</strong><br> Ensemble of 3 transformer models - </p> <p>All these were trained with the same hyper parameters - max len=512 (anything higher didn’t work on LB) , 5 folds. </p> <ul> <li>DebertaXLarge - Private 0.92 - Public 0.939 - Ensemble Weight - 50%</li> <li>Deberta v3 Large - Private 0.847 - Public 0.914 - Ensemble Weight - 25 %</li> <li>Roberta Large - Private 0.849 - Public 0.923 - Ensemble Weight - 25 %</li> </ul> <p>Neptune.ai for logging - first time user, pleasant experience, will try next time</p> <p><strong>Other Submissions</strong></p> <p>My other selected submissions (best public LB - 0.97, private - 0.927 - public rank 42) had TF-IDF based approaches more or less similar to public kernels. I ensembled these with my own deberta models and score goes up to 0.97 on public LB. 
I didn’t try to tune hyperparameters for the TF IDF models, they were from public kernels and it seemed like they were tuned collaboratively anyways on public LB. I did change the datasets a little for these though. Some of the datasets that seemed to work with transformer models didn’t really work well with boosting/MNB/SGD models. So I ended up using different datasets for different models.</p> <p><strong>Things didn’t work</strong></p> <ul> <li><p>Generating data with mistralai/Mistral-7B-Instruct-v0.2 after fine tuning with Persuade Corpus (similar to Darek’s dataset) but I wasn’t able to make this work. Should have used base <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/470395#2617818" target="_blank">Mistral model</a></p></li> <li><p>Some other transformer models - Electra, other deberta variants. </p></li> <li><p>MLM pretraining with deberta v3 large.</p></li> <li><p>Log probabilities with LGBM - these seemed to have done well in private, I didn’t get to spend much time on it though. </p></li> <li><p>Fine tuning 7B model - poor private LB 0.69 and public LB ~0.85 (100% AUC)</p></li> <li><p>Tensorflow models from public kernels</p></li> <li><p>Pretty much no hyperparameter tuning/different heads for transformer models also because no validation set. </p></li> <li><p>I couldn’t try a lot of the other things that I normally would in any other competition because there’s no validation set. I am hoping this “no dataset” and “no validation set” doesn’t become a norm on Kaggle. </p></li> </ul> <p><strong>Summary</strong> - Just transformer models performed better compared to TFIDF approaches or even ensemble of both approaches for me. Also final submission selection is very important !</p> <p>Thank you for reading ! </p> <hr> <p><strong>Links to Models/Code</strong></p> <p><a href="https://www.kaggle.com/code/rashmibanthia/llm-detect-13th-gold-solution/notebook" target="_blank">Ensemble Inference</a></p> <p><a href="https://www.kaggle.com/models/rashmibanthia/llm-detect-deberta-xlarge" target="_blank">Deberta xLarge Model</a> and <a href="https://www.kaggle.com/code/rashmibanthia/llm-detect-debertaxlarge" target="_blank">Deberta xlarge Inference code</a></p> <p><a href="https://www.kaggle.com/models/rashmibanthia/llm-detect-deberta-v3-large" target="_blank">Deberta v3 Large Model</a> and <a href="https://www.kaggle.com/code/rashmibanthia/llm-detect-deberta-v3-large-inference" target="_blank">Deberta v3 Large Inference code</a></p> <p><a href="https://www.kaggle.com/models/rashmibanthia/llm-detect-roberta-large" target="_blank">Roberta Large Model</a> and <a href="https://www.kaggle.com/code/rashmibanthia/llm-detect-robertalarge" target="_blank">Roberta Large Inference code</a> </p> <p>Thank you 🙏 <a href="https://www.kaggle.com/aman1391" target="_blank">@aman1391</a> </p> <p>Since there was no cv - weights were guesstimates based on public LB score for individual models. Otherwise I would have certainly tried NelderMead / Stacking for ensemble. </p>
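<p>The embedding-based duplicate removal mentioned above can be sketched like this (the 0.95 similarity threshold and the variable names are assumptions; the write-up only states that BAAI/bge-base-en-v1.5 embeddings and cosine similarity were used).</p> <pre><code>from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

emb_a = model.encode(persuade_texts, normalize_embeddings=True, batch_size=64)
emb_b = model.encode(train_texts, normalize_embeddings=True, batch_size=64)

# With normalized embeddings, cosine similarity is a plain dot product.
sims = emb_a @ emb_b.T
is_duplicate = sims.max(axis=1) &gt; 0.95
persuade_deduped = [t for t, dup in zip(persuade_texts, is_duplicate) if not dup]
</code></pre>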
LLM - Detect AI Generated Text
14th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I have been working hard towards achieving Solo Gold for several years. <br> This time, I am very happy to have reached Solo Gold, and I would like to express my gratitude for the many discussions and notebooks that have taught me along the way. <br> Thank you.</p> <p>I apologize for only having a simple solution prepared, but I would like to share it briefly.</p> <h1>14th Place Solution</h1> <p>My solution is based on the following notebook. <br> <a href="https://www.kaggle.com/code/batprem/llm-daigt-analyse-edge-cases" target="_blank">LLM DAIGT Analyse edge cases</a><br> (special thanks)</p> <p>While there are some minor adjustments, I made significant contributions with two main additions:</p> <ol> <li>roberta-large-openai-detector [1]</li> <li>Pseudo-Label</li> </ol> <p>These additions have played a crucial role in enhancing the overall solution.</p> <h2>roberta-large-openai-detector</h2> <p>I considered the addition of BERT model-based classification crucial to differentiate my approach from methods based solely on word frequency.</p> <p>Among various experiments, this model achieved a score of 0.813 on the Leaderboard without requiring tuning. (private score → 0.839)</p> <p>I experimented with various tuning techniques, but unfortunately, I couldn't manage to control overfitting.</p> <p>So, I added the output without tuning as a feature to my model.</p> <p>Since there is a limitation on the number of input tokens in the model, I used the first 512 tokens from the head and the last 512 tokens from the tail to obtain two sets of predictions.</p> <p><a href="https://www.kaggle.com/code/batprem/llm-daigt-analyse-edge-cases" target="_blank">LLM DAIGT Analyse edge cases</a></p> <p>I trained LightGBM and XGBoost models independently using these BERT predictions as well as the word frequency features from the publicly available notebook.</p> <p>While the LB score of each individual model falls behind the one in the public notebook, ensembling them with the predictions from the public notebook led to an improvement in the LB score. (LB Score 0.965, Private Score 0.930)</p> <h2>Pseudo-Label</h2> <p>Considering the high LB score along with the tendency to overfit during training, I believed pseudo-labeling would be effective.</p> <p>I picked up outputs from the first-stage predictions in which I had confidence.</p> <p>The outputs of the BERT model excelled at predicting automatically generated texts (label 1), and considering the limited number of automatically generated examples (label 1) in the training data, I utilized the top 10% for label 1 and the top 7.5% for label 0.</p> <p>Furthermore, to avoid bias towards specific topics, I assigned Pseudo-Labels based on prompt_id.</p> <p>While there are various small refinements, these two contributions, especially, made significant impacts.<br> ((example) adjustment of training weights, noise data filtering, reducing features for faster processing, etc…)</p> <p>[1] Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., … &amp; Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203.</p>
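<p>A minimal sketch of the per-prompt pseudo-label selection described above; the column names and the exact selection mechanics are assumptions, only the 10% / 7.5% fractions come from the write-up.</p> <pre><code>import pandas as pd

def select_pseudo_labels(test_df, frac_pos=0.10, frac_neg=0.075):
    selected = []
    for _, g in test_df.groupby("prompt_id"):
        g = g.sort_values("pred", ascending=False)
        n_pos = int(len(g) * frac_pos)
        n_neg = int(len(g) * frac_neg)
        pos = g.head(n_pos).assign(label=1)  # most confident "AI-generated"
        neg = g.tail(n_neg).assign(label=0)  # most confident "human-written"
        selected.append(pd.concat([pos, neg]))
    return pd.concat(selected)

# The pseudo-labelled rows can then be appended to the training data of a second stage.
pseudo = select_pseudo_labels(test_preds_df)
</code></pre>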
LLM - Detect AI Generated Text
18th place solution Generating Adversarial Data
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to my teammates and the competition hosts for organizing this amazing competition.</p> <p>A brief summary of what we did:<br> 40% weight on the TF-IDF method: an average of 2, 3, 4, and 5 n-grams<br> 60% weight on an LLM ensemble: mostly 7B Mistral/Llama models. Our best submission used only our adversarially selected dataset described below.</p> <p>The dataset we used was constructed through adversarial selection. We relied on all the awesome public datasets that were shared in this competition.<br> We trained a small language model on 10% of the dataset, predicted the rest, and then chose the hardest examples. We mixed them 50/50 with the original Persuade dataset.</p> <p>We actually found this method pretty late and our submissions were made on the final day using hastily trained models. I think this method could potentially be improved by using a better base model to select the adversaries, training with a larger LoRA rank, etc…</p>
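<p>A rough sketch of the adversarial selection loop described above. <code>train_small_model</code> and <code>predict_proba</code> are hypothetical helpers standing in for whatever small language-model pipeline is used; the 10% seed fraction comes from the write-up, everything else is illustrative.</p> <pre><code>import numpy as np
from sklearn.model_selection import train_test_split

def pick_hard_examples(df, frac_seed=0.10, n_hard=20_000):
    seed_df, rest_df = train_test_split(df, train_size=frac_seed,
                                        stratify=df["label"], random_state=0)
    model = train_small_model(seed_df)             # hypothetical helper
    probs = predict_proba(model, rest_df["text"])  # hypothetical helper
    # "Hardest" = largest gap between the predicted probability and the true label.
    rest_df = rest_df.assign(error=np.abs(probs - rest_df["label"].to_numpy()))
    return rest_df.sort_values("error", ascending=False).head(n_hard)
</code></pre>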
LLM - Detect AI Generated Text
19th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle, the host and the community for this competition!</p> <p>I'm glad I survived the storm. A bit thirsty as I'm so close to gold, but given I had no clue how to score a medal up until a week or so ago, I feel it turned out fine for me. </p> <p>My main goal coming into the competition was moving up the ranks on datasets, notebooks and discussions. I feel I did really well there, thanks for all the upvotes! I might turn into a discussions GM soon :) </p> <h1>Approach</h1> <p>I approached this competition in four phases: </p> <ol> <li>Data generation. I expected the key to be generating data similar to how the host generated it, and finding good CV/LB correlation. I made good progress there but never found CV/LB correlation. </li> <li>Implementing and testing single models and methods. I only started doing this seriously in January, and given the lack of a good CV and limited attempts at the public LB, I feel I didn't fully reach the potential of these approaches (see a list below). </li> <li>Ensembling. I started ensembling in the last week. The goal was to create a diverse and robust ensemble that would survive the shake. </li> <li>Postprocessing. Thanks to the sharing from <a href="https://www.kaggle.com/piotrkoz" target="_blank">@piotrkoz</a>, I used his code to fix the train and test dataset for some of the models in the ensemble on the last day. </li> </ol> <p>I didn't have time or energy to probe the public LB much, which I think was good in hindsight. </p> <h1>Solution</h1> <p>The final solution is a weighted blend of:</p> <ol> <li>Ridge classifier on V2 + a sample of V4 datasets, public TF-IDF and test tokenization pipeline, spelling correction. </li> <li>Multinomial, SGD, Catboost models from the top public notebooks (V2, TF-IDF, test tokenization) with spelling correction</li> <li>LightGBM with weighted sampling from the V3 dataset based on source and a 2-step pipeline based on TF-IDF and test tokenization. First I train a lightweight model, take the top 10k most important features, then train a heavier LGBM model on those selected features. </li> <li>Ranking based on <code>flesch_kincaid_grade_level</code> and <code>flesch_reading_ease</code> metrics.</li> <li>Deberta-v3-large trained on a subset of the V4 dataset. </li> <li>Mistral-instruct-7B trained with QLORA on the V4 dataset, using <a href="https://www.kaggle.com/hotchpotch" target="_blank">@hotchpotch</a>'s pipeline</li> </ol> <h1>Failed experiments</h1> <p>Some experiments that didn't work for me (reading other top solutions, I probably didn't spend enough time to make them work):</p> <ol> <li>Mistral-based reward model trained on pairs of original and generated text (a new dataset I created in the last 2 days and didn't get to share)</li> <li>Ghostbuster re-implementation with Llama probabilities. Since this performed so well for the #1 team, I should probably revisit my code. </li> <li>Various models trained on quantized unigram and trigram probabilities of words in essays. </li> <li>Deberta-based ranking. I trained a model in a multiple-choice architecture, compared random pairs of essays in test, and then trained a Bradley-Terry model on those comparisons. </li> <li>NER model.
I mixed random samples of human- and AI-generated essays and trained a NER model on tokens. It worked really well in my experience, I then used various features to predict text-based score. The best feature was number of consecutive tokens above certain probability threshold to make it AI generated. This is the interpretable method I shared in one of the discussions, and while it didn't work well on the LB, I feel it could be very useful. </li> <li>Ridge forest - I experimented with combining some ideas from RF/GBDT - feature and data subsampling - with linear classifiers like Ridge. It added a bit of robustness but also execution time, so I removed it at the end.</li> </ol> <p>Thanks to everyone that shared ideas, datasets, code and memes (very much needed!) in the competition!</p> <p>lol no gold for me, maybe other cats got it. I liked how interpretable NER is. It didn't work too well on the public/private though and I didn't use it. Probably because I didn't spend enough time tuning on the right data. </p> <p>Thanks a lot Raja! For the reward model, I used <a href="https://www.kaggle.com/code/datafan07/single-model-rewardtrainer-lora-llm" target="_blank">science exam pipeline</a> from <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a>. Unfortunately, this was the last night experiment that failed when I was sleeping, so I was only able to train it for ~1hour which wasn't enough. </p> <p>I got feedback from public LB, but I didn't do too much tuning. </p> <p>The challenge for me was lack of good CV/LB correlation, how can I tune a model if I can't properly evaluate it? I didn't do too many experiments for this reason, just tried a few settings while looking at the training loss and validation metrics to make sure they look fine. The key to a good model is a good dataset, top teams did an awesome job on this element. </p>
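<p>One simple way to combine such heterogeneous components (linear models, readability rankings, transformers) is a rank-based weighted blend, sketched below. The weights and prediction arrays here are purely illustrative; the write-up does not state the actual weights.</p> <pre><code>import pandas as pd

preds = pd.DataFrame({
    "ridge": p_ridge, "public_tfidf": p_public, "lgbm": p_lgbm,
    "readability": p_readability, "deberta": p_deberta, "mistral": p_mistral,
})
weights = {"ridge": 0.20, "public_tfidf": 0.25, "lgbm": 0.20,
           "readability": 0.05, "deberta": 0.15, "mistral": 0.15}

# Rank-transform each component so scores on different scales blend cleanly;
# ROC-AUC only depends on the ordering anyway.
ranked = preds.rank(pct=True)
blend = sum(ranked[col] * w for col, w in weights.items())
</code></pre>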
LLM - Detect AI Generated Text
24th solution: DeBERTa & TF-IDF Vectorizer Ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you so much for your congratulations and interest in our solution! <br> We turned our notebook public. check this link :) <br> <a href="https://www.kaggle.com/code/kimseunghee/24th-place-notebook-public-0-966-private-0-927" target="_blank">https://www.kaggle.com/code/kimseunghee/24th-place-notebook-public-0-966-private-0-927</a></p> <p>Thanks! In Discussion and Code, Kagglers shared many datasets. A lot of data came from there!</p> <p>Thanks to the host for opening this competition. We learned valuable lessons that will undoubtedly contribute to our growth. </p> <p>Here is the brief solution of our's.<br> We used an ensemble of DeBERTa models &amp; TF-IDF Vectorizers.<br> Details are below.</p> <h1>1. DeBERTa</h1> <h3>CFG</h3> <p>Model: DeBERTa-v3-large<br> epoch: 5~6<br> train max_length: 256~300<br> inference max_length: 512</p> <h3>Data</h3> <ul> <li>Human Data: 300k++</li> <li>AI Data: 300k++ (included self-made data via ChatGPT3.5, 4)</li> <li>Used diverse Data, not only 'RDizzl3_seven'.</li> <li>AI generated text with more than 10 typos all excluded. typo checked by Pyspellchecker.</li> <li>CV split: Stratified KFOLD - 0.9:0.1 </li> </ul> <h1>2. TF-IDF Vectorizer</h1> <p>Special Thanks to <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> for providing us the amazing public notebook! It was awesome.</p> <h3>Model</h3> <p>MultinomialNB, SGDClassifier, LGBMClassifier</p> <h3>Data</h3> <ul> <li>DAIGT-V3 by <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> and we added our self-made data via ChatGPT3.5, 4</li> <li>[llama, ada, babbage, claude] -&gt; we excluded these data since a lot of them had too much typo.</li> <li>Used diverse Data, not only 'RDizzl3_seven'.</li> <li>AI generated text with more than 10 typos all excluded. typo checked by Pyspellchecker.</li> </ul> <h1>Ensemble</h1> <ul> <li>TF-IDF Vectorizer: MultinomialNB x 1, SGDClassifier x 3, LGBMClassifier x 3</li> <li>Deberta x 4</li> <li>TF-IDF Vectorizer : DeBERTa = 0.65:0.35</li> </ul> <p>Thanks again to the competition hosts and everyone who participated!! Good luck everyone for your next competitions!</p> <p>You can check our submission notebook here: <br> <a href="https://www.kaggle.com/code/kimseunghee/24th-place-notebook-public-0-966-private-0-927" target="_blank">https://www.kaggle.com/code/kimseunghee/24th-place-notebook-public-0-966-private-0-927</a></p> <h3>Team member</h3> <p><a href="https://www.kaggle.com/danielchae" target="_blank">@danielchae</a> <br> <a href="https://www.kaggle.com/kimseunghee" target="_blank">@kimseunghee</a> </p> <p>Thanks! :) our single deberta model's best Public score was 0.872, best Private score was 0.836. But when combined with TF-IDF Vectorizer, we got a better score.</p> <p>We didn't train our LGBMClassifier online. We used LGBM parameters based on Public Notebooks, tuning the values of the parameters randomly just a little bit :)</p> <p>Thanks! we think it worked since the competition metric was ROC-AUC. If the metric was different, it may had a different result.</p> <p>Thanks! 
:)<br> In case of DeBERTa, CV was not reliable(0.999+) so we only excluded models with CV under 0.9990<br> In case of TF-IDF vectorizer, CV was not reliable also. Because of that, we made our decision based on Public scores of various experiments.</p>
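<p>The typo-based filtering of AI-generated texts can be sketched with pyspellchecker as follows (<code>ai_df</code> and the tokenization regex are assumptions; the 10-typo threshold is from the write-up).</p> <pre><code>import re
from spellchecker import SpellChecker

spell = SpellChecker()

def count_typos(text):
    words = re.findall(r"[A-Za-z']+", text.lower())
    return len(spell.unknown(words))  # words not found in the dictionary

# Keep AI-generated essays only if they contain at most 10 suspected typos.
ai_df = ai_df[ai_df["text"].apply(count_typos) &lt;= 10]
</code></pre>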
LLM - Detect AI Generated Text
25th Place Solution for DAIGT (Public LB: 0.966 Private LB: 0.927)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We are very happy to share our solution and are so grateful to everyone on Kaggle for sharing your ideas, code, and datasets.</p> <h2>Overview</h2> <p>Our final solution follows this workflow:</p> <ol> <li>Data cleaning and typo correction</li> <li>BPE tokenizer training + TF-IDF feature extraction</li> <li>Models: MultinomialNB x 1, SGDClassifier x 1, LGBMClassifier x 1, CatBoostClassifier x 1</li> <li>Ensemble and identification of samples with prediction probability between 0.05 and 0.45 as hard samples</li> <li>Use the Mistral-7B model to predict the hard samples and DistilBert to predict all samples</li> <li>Use samples with blend probabilities less than 0.05 and greater than 0.5 as labeled data for unsupervised learning</li> <li>Blend the results of steps 4, 5, and 6 for the final prediction</li> </ol> <h2>Solution Details</h2> <ol> <li><p><strong>Data cleaning and typo correction</strong></p> <p>Inspired by <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/468767" target="_blank">piotrkoz's Discussion</a>, we delved deeper into typo correction. We began by correcting large-scale character substitutions, followed by identifying and correcting words with a Levenshtein distance of 1 to misspelled words. This process was applied to both the training and test data to ensure consistency.</p> <pre><code>text = ...
text = correct_substitutions(text, ...)
text = correct_dist1(text)
</code></pre> <p>These corrections provided a <strong>~0.01</strong> improvement on the private LB.</p></li> <li><p><strong>LLMs for ensemble</strong></p> <p>We fine-tuned Mistral-7B with LORA on the DAIGT-v2 dataset, which yielded a 0.884 on the public LB. Although the model did not perform as well on the private LB, we had no way of knowing this before the competition ended. We also attempted to fine-tune DeBERTa-v3 but were unable to improve upon the public model. Thus, we used the public DistilRoberta model and our Mistral-7B model to make predictions on the test data.</p> <p>We used the LLM predictions only for "hard samples" because we observed that TF-IDF features could robustly predict most samples, and LLMs might reverse predictions on some samples.</p></li> <li><p><strong>Semi-supervised Learning</strong></p> <p>We were impressed by <a href="https://www.kaggle.com/code/aerdem4/daigt-superfast-unsupervised-baseline" target="_blank">aerdem4's notebook</a>, which achieved a 0.91 on the public LB using only the test set and two "magic words". However, we questioned the method's robustness, as its performance heavily depended on the choice of magic words and the distribution of samples for each prompt ID. This could also explain the model's underperformance on the private LB.
Ultimately, we discarded the magic words approach and used TF-IDF prediction probabilities to label samples as training data, which led to a <strong>~0.02</strong> improvement on the private LB on the last day.</p></li> </ol> <h2>Team Members</h2> <p>Gengyang Xiao <a href="https://www.kaggle.com/adrianxiao" target="_blank">@adrianxiao</a>, Jiayang Zhao <a href="https://www.kaggle.com/superxiaotang" target="_blank">@superxiaotang</a>, Junrui Wang <a href="https://www.kaggle.com/junruiwang" target="_blank">@junruiwang</a>, Zhiyang Zhang <a href="https://www.kaggle.com/zyzhang0109" target="_blank">@zyzhang0109</a> (in alphabetical order)</p> <p>Feel free to connect with any of us via Kaggle or LinkedIn.</p> <h2>Acknowledgement</h2> <p>Many thanks to <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a>, <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a>, <a href="https://www.kaggle.com/mustafakeser4" target="_blank">@mustafakeser4</a>, <a href="https://www.kaggle.com/piotrkoz" target="_blank">@piotrkoz</a>. </p> <p>We learned a lot from you. Thank you!</p> <p>Thank you, Valentin. The notebook you mentioned runs for 9 hours and spends most of its time on Catboost. We found that removing Catboost can reduce the Public LB score, so we changed it to 2000 iterations to save three extra hours of running time.</p> <p>We verified the improvement of adding the Mistral-7B model using our validation hard samples, and it turned out that the observation still holds on the Private LB (<strong>~0.01</strong>). However, since the model was fine-tuned by us, I am not sure whether using the public fine-tuned Mistral-7B model can lead to any improvement.</p> <p>Thanks, man. We really love your idea. Thanks again for your sharing!</p> <p>Thanks Darek! The dataset you made is GOAT!</p> <p>Yes, you are right. We blended them first and used the result for the semi-supervised step. The reason behind this is that we think the semi-supervised method relies on the accuracy of the labels, so we want to make them as precise as possible.</p> <p>The thresholds we used came first from our experiments on the validation set and were then <strong>verified on the Public LB by applying some intentional post-processing</strong>. To avoid overfitting the Public LB, we left some margin so that our observation would still hold on the Private LB. That's how we got 0.05, 0.45, and 0.5.</p> <p>The reason for using the semi-supervised method is simple. Firstly, we found that by using the predictions to label the samples, we can get better results than with the "magic words" aerdem4 used. Secondly, we think kNN can use the information from the test set, which could boost our robustness on the test set.</p> <p>My answer is yes and no. Like everyone in this competition, our CV was really high and unreliable. What we know from the validation set is that the LLMs can classify the samples on which the TF-IDF method did not perform well. We guess the reason is that LLMs focus more on semantic features compared to TF-IDF.</p>
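<p>The hard-sample routing in steps 4 and 5 can be sketched as below. The 0.05 / 0.45 band and the 0.5 pseudo-label threshold are from the write-up; the 50/50 blending weight inside the band is an assumption.</p> <pre><code>import numpy as np

def route_hard_samples(tfidf_prob, llm_prob, low=0.05, high=0.45, w_llm=0.5):
    # Blend LLM predictions into the TF-IDF ensemble only where TF-IDF is unsure.
    hard = (tfidf_prob &gt; low) &amp; (tfidf_prob &lt; high)
    final = tfidf_prob.copy()
    final[hard] = (1 - w_llm) * tfidf_prob[hard] + w_llm * llm_prob[hard]

    # Confident rows become pseudo-labels for the semi-supervised step.
    pseudo_pos_idx = np.where(final &gt; 0.5)[0]
    pseudo_neg_idx = np.where(final &lt; low)[0]
    return final, pseudo_pos_idx, pseudo_neg_idx
</code></pre>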
LLM - Detect AI Generated Text
29th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Big thanks to the organizers for holding this competition. It has been a tremendous learning experience. <br> I would like to extend my heartfelt thanks to my team members, <a href="https://www.kaggle.com/drpatrickchan" target="_blank">@drpatrickchan</a> and <a href="https://www.kaggle.com/jerifate" target="_blank">@jerifate</a>, for their invaluable contributions and collaboration. Additionally, my sincere appreciation goes to the entire community for sharing their insights.</p> <p>This is the brief summary of the solution.</p> <h1>TF-IDF Model (from public notebook):</h1> <ul> <li>Source: <a href="https://www.kaggle.com/code/batprem/llm-daigt-excluded-prompts" target="_blank">LLM DAIGT excluded prompts</a></li> <li>Scores: Private - 0.895, Public - 0.963</li> </ul> <h1>Distil RoBERTa Model (from public notebook):</h1> <ul> <li>Source: <a href="https://www.kaggle.com/code/mustafakeser4/inference-detectai-distilroberta-0-927" target="_blank">[inference]DetectAI DistilRoberta[0.927]👍</a></li> <li>Scores: Private - 0.884, Public - 0.927</li> </ul> <h1>LSTM&amp;Transformer CNN Approach:</h1> <ul> <li>Inspired by: <a href="https://www.kaggle.com/code/ichigoe/ensemble-transformercnn-roberta" target="_blank">Ensemble_TransformerCNN&amp;Roberta</a></li> <li>Enhancement: Additional Category Features such as Flesch reading ease, Fog index, Score to grade level, Comprehension level, and Polarity were added.</li> <li>Scores: Private - 0.766, Public - 0.926</li> </ul> <h1>Deberta Model:</h1> <ul> <li>Dataset: 900k rows, including various sources like llm-mistral-7b-instruct-texts, daigt-external-dataset, etc.</li> <li>Highlight: <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v4-train-dataset" target="_blank">daigt-v4-train-dataset</a> with magic was particularly helpful for me. Thanks to <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> for <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/468246" target="_blank">the discussion</a></li> <li>Training Approach: Deberta-v3-large with 1024 max length. 
Additionally, custom truncation (see below) and label smoothing with a factor of 0.1 were used.</li> </ul> <pre><code>def custom_truncate(text, tokenizer_in, max_length=MAX_LEN):
    # Split into sentences and tokenize each one separately.
    sentences = sent_tokenize(text)
    tokenized_sentences = [
        tokenizer_in.encode(sentence, add_special_tokens=False)
        for sentence in sentences
    ]
    total_tokens = sum(len(s) for s in tokenized_sentences)
    if total_tokens &lt;= max_length:
        return text

    # Find the last sentence index that could still serve as a starting point.
    tokens_so_far = 0
    max_start_index = 0
    for i, sentence_tokens in enumerate(tokenized_sentences):
        tokens_so_far += len(sentence_tokens)
        if tokens_so_far &gt; max_length:
            max_start_index = i
            break

    # Randomly pick a starting sentence, then greedily add sentences up to max_length.
    start_index = random.randint(0, max(max_start_index - 1, 0))
    truncated_tokens = []
    for sentence_tokens in tokenized_sentences[start_index:]:
        if len(truncated_tokens) + len(sentence_tokens) &lt;= max_length:
            truncated_tokens.extend(sentence_tokens)
        else:
            break

    truncated_text = tokenizer_in.decode(
        truncated_tokens,
        skip_special_tokens=True,
        clean_up_tokenization_spaces=True
    )
    return truncated_text


def tokenize_with_custom_truncation(examples, max_length=MAX_LEN):
    text_li = examples["text"]  # column name assumed
    truncated_text_li = []
    for text in text_li:
        truncated_text = custom_truncate(text, tokenizer, max_length)
        truncated_text_li.append(truncated_text)
    return tokenizer(
        truncated_text_li,
        padding=True,
        truncation=True,
        max_length=max_length,
    )
</code></pre> <ul> <li>I believe this randomness helped to build a robust model</li> <li>Scores: Private - 0.901, Public - 0.931</li> </ul> <h1>Ensemble Approach:</h1> <ul> <li>Weight Distribution: TF-IDF (0.5), Distil RoBERTa (0.1), Three Tower (0.15), Deberta (0.25)</li> <li>Strategy: Balanced emphasis between TF-IDF and other models for effective performance.</li> <li>Final scores: Private - 0.921, Public - 0.970</li> </ul> <p><em>Update: <a href="https://www.kaggle.com/code/ympaik/29th-solution/notebook" target="_blank">the solution code has been released.</a></em></p> <p>It was about the full 8 hours. We had to worry about the time-out error when testing ensembles. </p>
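<p>The final blend reduces to a simple weighted average with the weights quoted above; the prediction arrays below are placeholders for aligned per-essay probabilities.</p> <pre><code># weights from the write-up: TF-IDF 0.5, Distil RoBERTa 0.1, LSTM/Transformer-CNN 0.15, Deberta 0.25
final = (0.50 * p_tfidf
         + 0.10 * p_distilroberta
         + 0.15 * p_lstm_transformer_cnn
         + 0.25 * p_deberta)
</code></pre>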
LLM - Detect AI Generated Text
2nd place solution with code and data
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I have NVIDIA RTX A6000 x 3</p> <p>I want to apologize that I actually used some llama chat versions for the data generation, so my previous statement is wrong.😅</p> <p>I updated the training code to include the generation code. Please read the updated description for more details. </p> <p>I updated the training code to include the generation code. </p> <p>I think I just copied this notebook<br> <a href="https://www.kaggle.com/code/yongsukprasertsuk/llm-daigtext-0-961/notebook" target="_blank">https://www.kaggle.com/code/yongsukprasertsuk/llm-daigtext-0-961/notebook</a></p> <p>Thanks to Kaggle, Vanderbilt University, and The Learning Agency Lab for hosting this competition. Detecting AI-generated text has been an important research topic with the rise of LLMs, and I'm sure the methods developed in this competition will contribute to this research area.</p> <p>Link to inference code: <a href="https://www.kaggle.com/code/wowfattie/daigt-2nd-place" target="_blank">https://www.kaggle.com/code/wowfattie/daigt-2nd-place</a><br> Link to training code and data: <a href="https://www.kaggle.com/datasets/wowfattie/daigtpretraindata" target="_blank">https://www.kaggle.com/datasets/wowfattie/daigtpretraindata</a></p> <p><em>Here is my solution:</em></p> <p>I started working on this competition relatively late. At first, I did some finetuning with deberta models on datasets generously shared by other competitors, all my initial trials resulted in nearly perfect CV but much lower and unstable public LB score. After realizing this finetuning would not go anywhere, I started playing with the tfidf kernels and forming ideas to improve it. During this process, <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/465882" target="_blank">James's post</a> on creating large and diverse dataset and training a more general model grabbed my attention, and I immediately realized this could be an important "pretraining" step in a winning solution. So I abandoned my to-do list and invested all my efforts and computing resources to reproduce his approach. In several days I created about half a million human/AI pairs from the SlimPajama dataset with various LLMs. After finetuning on this dataset using deberta-v3-large, public/private LB was at 0.916/0.967. This public score was much lower compared with what James reported, probably due to some difference in generation settings, the LLMs used, or data selection.</p> <p>After this "pretraining" step, the original "language" model was transformed to a general "human/AI text generation" classification model. Since we know the test prompts are from persuade corpus only, it's worth doing another finetuning to adapt the model to those essays. After looking into the persuade corpus csv, I noticed that all the 5 prompts in test data ask students to write essays based on the contents of some articles, but I could not find those articles anywhere (other than the Venus one). The issue is if those articles were used for generating the test set by the host, the AI texts could be very similar to human texts. 
To close this gap, I did language model (LM) finetuning on the student essays so the LLMs could generate texts mimicking students' writing, including citations from the original articles and even typos. To perform the LM finetuning, <a href="https://github.com/h2oai/h2o-llmstudio" target="_blank">llm-studio</a> is the best tool, no code was required, just prepared the csv file and clicked on the GUIs, and the models were ready for essay generation. Finetuning the general classifier on this dataset improved the public/private LB to around 0.94/0.98. My final 2nd place kernel contains an ensemble of two deberta-v3-large models and one deberta-large model, I also finetuned the three models on the <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v4-train-dataset" target="_blank">DAIGT-V4-TRAIN-DATASET</a> and added them in the ensemble, the public/private LB score became 0.967/0.983. My best public score was achieved by blending the above deberta predictions with the 0.961 tfidf kernel, but the private score is lower.</p> <p>Thanks. I used llama2s and mistral. But I didn't try any instruct or chat version because I believe any further finetuning weakens the foundation models. I will share more today.</p> <p>Thanks. I will share all my data generation code today.</p>
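<p>To make the data-generation step above concrete, here is a minimal sketch of building human/AI pairs by asking an open-source causal LM to continue truncated documents. The model name, truncation ratio and sampling settings are illustrative assumptions, not the exact generation settings used in this solution:</p>
<pre><code>import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # assumed; the write-up mentions llama2 and mistral variants
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

def make_pair(document, keep_ratio=0.5, max_new_tokens=512):
    """Return (human_text, ai_text) that share the same prefix; label 0 / 1 respectively."""
    words = document.split()
    prefix = " ".join(words[: int(len(words) * keep_ratio)])
    inputs = tokenizer(prefix, return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=1.0,   # the write-up varied sampling settings; these values are placeholders
        top_p=0.95,
    )
    ai_text = tokenizer.decode(out[0], skip_special_tokens=True)
    return document, ai_text
</code></pre>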
LLM - Detect AI Generated Text
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi. I am using 8 cores Intel Xeon, A6000 GPU with 48 GB RAM or A100-80GB GPU with 96 GB RAM, depending on availability </p> <p>Hi, I am using a regular pytorch training loop </p> <p>Hi! Sorry for the confusion. What I tried to say is that we trained 4 models with the weights from hugging face, and 5 models with the weights from the models trained on 500k+ generated data. </p> <p>Both of these are fine-tuning since we didn't train the models from scratch, but I used the term "finetuning" only in the second case since I wanted to highlight that the states are from the previous point in the write-up. </p> <p>Thank you! Best of luck to you too! </p> <p>Many thanks to the organizers for creating the competition.</p> <p>Our solution is a weighted average of tfidf pipeline and 12 deberta-v3-large models.</p> <h3>Transformers Ensemble</h3> <p>As a preprocessing step, we used the deobfuscator shared by <a href="https://www.kaggle.com/sorokin" target="_blank">@sorokin</a> (<a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/457819" target="_blank">post</a>), but we corrected only texts that had more than 15 errors. Also, we removed symbols that were not in the original train set and normalized the encodings of the remaining symbols.</p> <p>4 models were trained on 11k selected generated/rephrased (essay-level and sentence-level)/partially rephrased essays; some of the essays are from shared datasets, and some are custom-generated using several LLMs. <br> We selected training samples using the following algorithm:</p> <ul> <li>Train the initial model using <a href="https://www.kaggle.com/alejopaullier" target="_blank">@alejopaullier</a> <a href="https://www.kaggle.com/datasets/alejopaullier/daigt-external-dataset" target="_blank">data</a></li> <li>At each iteration, add samples that the previous model failed to predict correctly - 500 human-written and 500 generated, with the highest distance from the true label.</li> <li>Train a new model and repeat again</li> </ul> <p>We evaluated each 4-th iteration on an LB. Once LB stopped improving we took the previous best dataset. A best single model trained on this data has a 0.927 public and 0.845 private score.</p> <p>Inspired by <a href="https://www.kaggle.com/jsday96" target="_blank">@jsday96</a> <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/465882" target="_blank">post</a> we generated continuation for pile and slimpajama datasets. We filtered out text that was too short/too long, contained code or math, non-English text, and had a high non-letters/letters ratio. Then we used ~35 different open-source models with different combinations of parameters. We split sampling parameters into 3 scenarios depending on the temperature value and used random values for top_p/min_p and presence_penalty/frequency_penalty within bounds specified for each scenario. <br> We've trained 3 models using 500k, 1m, and 1.2m samples generated this way. All models were trained with default hyperparameters, max length 256 (1512 for inference), and high batch size - 48. 
The best single model was trained with ~1m samples and has a 0.956 public and 0.967 private score.</p> <p>We also finetuned 5 models on the selected 11k dataset (weights are from the models trained on 500k+). The public LB for these models was slightly higher, but the private was worse by ~0.005. </p> <h3>Tfidf Pipeline</h3> <p>We took one of the earliest public notebooks (<a href="https://www.kaggle.com/code/hubert101/0-960-phrases-are-keys?scriptVersionId=153589869" target="_blank">link</a>) and made a few adjustments. </p> <ul> <li>Increased the number of catboost and lightgbm iterations by 250, and used weights=[0.05, 0.225, 0.225, 0.5] for the voting classifier</li> <li>Added 1k pseudo-labelled samples from the test set to <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">dataset</a> - only samples in which the ensemble of transformers was most confident (probabilities lower than 0.01 or higher than 0.99)</li> </ul> <p>With these changes, the public score remained the same, but the private increased from 0.893 to 0.927. <br> Since it was a bit of a gamble, we selected both - the initial pipeline and the adjusted one; they have 0.970 and 0.974 private scores respectively.</p> <h3>Final Ensemble</h3> <p>We used a weighted average ensemble on probabilities in two steps:</p> <ul> <li>Firstly, we weighted tfidf and the models trained on the 11k dataset - but only for the samples where transformer predictions were lower than 0.1 or higher than 0.9; for samples in the middle we used just the tfidf probs</li> <li>Secondly, we used a weighted average, without any conditions, of step 1 and the models trained on the large datasets. </li> </ul> <p>Averaging this way improved both private/public LB and local CV (though it was unreliable).</p> <h3>Postprocessing</h3> <p>For each prompt_id, if the number of samples there was greater than 1000, we fitted umap on the tfidfs (the same as in the tfidf-catboost pipeline, but per-prompt), calculated the distance to the 7 closest human-written and 7 generated samples, and scaled predictions by the ratio human_distance / generated_distance with clipping to (0.9, 1.1). It slightly improved public and private LB.</p> <h3>Acknowledgements</h3> <p>I want to say thank you to everyone who shared their ideas/assumptions/datasets. Especially <a href="https://www.kaggle.com/evilpsycho42" target="_blank">@evilpsycho42</a> for your great work during this competition. </p> <h3>Links</h3> <p>Inference: <a href="https://www.kaggle.com/code/evgeniimaslov2/llm-daig-3rd-place-solution?scriptVersionId=160663257" target="_blank">https://www.kaggle.com/code/evgeniimaslov2/llm-daig-3rd-place-solution?scriptVersionId=160663257</a><br> Training: <a href="https://www.kaggle.com/datasets/evgeniimaslov2/llm-daig-src-code" target="_blank">https://www.kaggle.com/datasets/evgeniimaslov2/llm-daig-src-code</a></p> <p>Hi Gregor, thanks! CommonLit was a very painful experience :)</p> <p>Thanks! We tested an ensemble of 8 models - it has 0.959 public/0.967 private (slightly higher than a single model). <br> I would say that the ensemble of several models didn't help much, maybe because the weights were poorly optimized </p> <p>Hi. We generated 500-600k samples as a continuation for truncated slimpajama/pile texts. 1m is the concatenation of raw text and generated. </p> <p>With the "previous model failed to classify" approach we selected only 11k generated/natural samples from persuade essays</p>
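<p>A minimal sketch of the iterative hard-sample selection described above: at each round, add the 500 human and 500 generated essays the current model gets most wrong, then retrain. A TF-IDF + logistic regression stand-in replaces the deberta model purely for illustration, and the <code>text</code>/<code>label</code> column names are assumptions:</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def mine_hard_samples(train_df, pool_df, n_rounds=10, per_class=500):
    """Iteratively add the pooled essays the current model predicts worst (per_class per label per round)."""
    selected = train_df.copy()
    for _ in range(n_rounds):
        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 3), min_df=2),
                              LogisticRegression(max_iter=1000))
        model.fit(selected["text"], selected["label"])
        probs = model.predict_proba(pool_df["text"])[:, 1]           # P(generated)
        pool_df = pool_df.assign(error=np.abs(pool_df["label"].to_numpy() - probs))
        hardest = pd.concat([
            pool_df[pool_df["label"] == 0].nlargest(per_class, "error"),   # hardest human essays
            pool_df[pool_df["label"] == 1].nlargest(per_class, "error"),   # hardest generated essays
        ])
        selected = pd.concat([selected, hardest.drop(columns="error")], ignore_index=True)
        pool_df = pool_df.drop(hardest.index).drop(columns="error")
    return selected
</code></pre>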
LLM - Detect AI Generated Text
42nd solution: BPE + LSTM-Transformer-CNN + DistillRoberta + Superfast Unsupervised
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here is my simple solution:</p> <ul> <li><p>Model1: Public 963 with some postprocessing (public LB: 0.963, private LB: 0.894)<br> Based on the predicted probability, convert the top 2% to 1 and the bottom 10% to 0</p></li> <li><p>Model2: LSTM-Transformer-CNN model (public LB: 0.927, private LB: 0.788)</p></li> <li><p>Model3: DistillRoberta model (public LB: 0.927, private LB: 0.884)</p></li> <li><p>Model4: <a href="https://www.kaggle.com/code/aerdem4/daigt-superfast-unsupervised-baseline" target="_blank">https://www.kaggle.com/code/aerdem4/daigt-superfast-unsupervised-baseline</a> (Model 4.1) + Model 3 (public LB: 0.956, private LB: 0.914; with some postprocessing it can get private LB 0.917):<br> Use the top 9% of Model 3 predictions as the AI-likely part of Model 4.1 and the bottom 45% as the student-likely part. Then search the top 20 student-likely and AI-likely essays to calculate the final probability.</p></li> <li><p>Final result: 0.552*Model1 + 0.08*Model2 + 0.184*Model3 + 0.184*Model4 (public LB 0.967, private LB: 0.918)</p></li> </ul>
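<p>A small sketch of the quantile post-processing and weighted blend above; the cutoffs and weights are the ones quoted in the write-up, while the array names and helper function are illustrative assumptions:</p>
<pre><code>import numpy as np

def clip_extremes(probs, top_frac=0.02, bottom_frac=0.10):
    """Force the most AI-like fraction to 1 and the most human-like fraction to 0."""
    out = probs.copy()
    out[out >= np.quantile(out, 1 - top_frac)] = 1.0
    out[out <= np.quantile(out, bottom_frac)] = 0.0
    return out

def blend(m1, m2, m3, m4):
    # final = 0.552*Model1 + 0.08*Model2 + 0.184*Model3 + 0.184*Model4
    return 0.552 * clip_extremes(m1) + 0.08 * m2 + 0.184 * m3 + 0.184 * m4
</code></pre>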
LLM - Detect AI Generated Text
5th place solution: 1.7 million training examples + domain adaptation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sounds like you tried to run the inference notebook without access to the "deberta-large-ls03-ctx1024" dataset, which I just realized I never made publicly accessible. I just updated the dataset settings. You should be able to add all the necessary inputs now…<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3512350%2Faec2f023614c31fe0f636742c19ddd14%2FScreenshot%20from%202024-01-26%2000-15-14.png?generation=1706246152930799&amp;alt=media"></p> <p>Thanks to the competition organizers for creating such an interesting competition! I learned a lot. And congrats to the other winners!</p> <p>Here's an explanation of the 5th place solution…</p> <h1>Overview</h1> <p>Finetuned <a href="https://huggingface.co/microsoft/deberta-v3-large" target="_blank">deberta-v3-large</a> and <a href="https://huggingface.co/state-spaces/mamba-790m" target="_blank">mamba-790m</a> on varying mixtures of the following datasets:</p> <table> <thead> <tr> <th>Dataset</th> <th># Human documents</th> <th># Generated documents</th> </tr> </thead> <tbody> <tr> <td>PERSUADE essays</td> <td>25,996</td> <td>327,268</td> </tr> <tr> <td>Uncopyrighted Pile Completions</td> <td>512,371</td> <td>512,371</td> </tr> <tr> <td>SlimPajama Completions</td> <td>233,146</td> <td>233,146</td> </tr> <tr> <td>Tricky Crawl</td> <td>125,192</td> <td>0</td> </tr> </tbody> </table> <p></p>Exact sampling proportions varied across models in the ensemble, but a reoccurring pattern is that everything except the Pile data was undersampled. The best models used 62 - 99% Pile data. It was our highest quality dataset. The Persuade essays were always 1% of the data mixture.<p></p> <p>Then, at test time, our inference notebook does the following:</p> <ol> <li><strong>Teacher model inference:</strong> The test dataset is labeled with 1 DeBERTa model + 2 Mamba models with a context length of 1024 tokens. Final soft labels are a weighted average with 90% of the weight given to the DeBERTa model. <strong>The use of Mamba hurt us here, we would have gotten 3rd place (0.977) without it.</strong></li> <li><strong>Student model training:</strong> Finetune 2 "short context student" models to imitate the teacher ensemble's predictions based on short randomly selected chunks of the test documents. One student model uses a context length of 128 <em>characters</em> (~32 tokens), the other uses 256 characters. These student models are pretrained on 1 million documents from the datasets above, mostly Pile data, and are both deberta-v3-large based.</li> <li><strong>Student model inference:</strong> The finetuned student models make predictions on overlapping chunks of the test documents with a stride equal to half their context length. 
Final predictions for each document are the average of all these predictions, with 60% of the weight given to predictions from the model with a context length of 128 characters, 40% to the one with a context length of 256 characters.</li> </ol> <p>My intuition for why this domain adaptation strategy is beneficial is that the short-context student models learn to look for dataset-specific wording quirks that are correlated with whatever the long-context models look for in the broader documents. This strategy was primarily inspired by <a href="https://arxiv.org/abs/2002.05709" target="_blank">SimCLR</a>, but in hindsight I realize it is also similar to <a href="https://arxiv.org/abs/1911.04252" target="_blank">Noisy Student Training</a> and <a href="https://arxiv.org/abs/1904.12848" target="_blank">UDA</a></p> <p>On its own, the full-context DeBERTa model we used scores 0.970 on the private LB, the Mamba models score ~0.957. The final student models score 0.977 without Mamba, 0.972 with Mamba. We didn't pick the best submission as one of our final 3, so we got 5th place instead of 3rd. Also, the DeBERTa model we used wasn't the best… we had one score 0.976 on its own, without domain adaptation.</p> <p><strong>Links:</strong></p> <ul> <li><strong>Inference/domain adaptation notebook:</strong> <a href="https://www.kaggle.com/jsday96/multi-context-students/" target="_blank">https://www.kaggle.com/jsday96/multi-context-students/</a></li> <li><strong>Data:</strong> <a href="https://www.kaggle.com/datasets/jsday96/ai-content-detection" target="_blank">https://www.kaggle.com/datasets/jsday96/ai-content-detection</a></li> <li><strong>Local training &amp; data generation code:</strong> <a href="https://github.com/jday96314/AiContentDetection" target="_blank">https://github.com/jday96314/AiContentDetection</a></li> </ul> <h1>Data</h1> <h2>Things all datasets had in common</h2> <ul> <li>Ran <a href="https://github.com/vllm-project/vllm" target="_blank">vLLM</a> on 2x RTX 3090s + 1x RTX 4090 to generate examples of AI-authored text.</li> <li>Used 4-bit AWQ quantization to conserve GPU memory for any model with &gt;= 13 billion parameters.</li> <li>Sampling temperatures randomly varied from 0 - 2 (used different randomly selected temperature for each vLLM API request).</li> <li>Top-K filter randomly selected from [disabled, 20, 40] for each vLLM API request.</li> <li>Top-P filter randomly varied from 0.5 - 1 for each vLLM API request.</li> <li>Frequency penalty randomly varied from 0 - 0.5 for each vLLM API request.</li> </ul> <h2>Uncopyrighted Pile Completions</h2> <p>The general strategy for creating this dataset was to randomly select documents from <a href="https://pile.eleuther.ai/" target="_blank">The Pile</a>, randomly truncate them, and prompt locally hosted LLMs to generate plausible continuations for the last 25 - 75% of each document. I initially used an old version of the pile which contains copyrighted data and is no longer publicly available, but filtered it to only contain document completions based on subsets of The Pile which are free of copyright restrictions and are <a href="https://huggingface.co/datasets/monology/pile-uncopyrighted" target="_blank">still publicly available</a>. </p> <p>Document completions were generated with the following models. 
Document counts below are from after the filtering which ran as a post-processing step (and discarded ~16% of the data if I recall correctly).</p> <table> <thead> <tr> <th>Model</th> <th>Document count</th> </tr> </thead> <tbody> <tr> <td>Airoboros-L2-13B-2.1-AWQ</td> <td>46,358</td> </tr> <tr> <td>CodeLlama-34B-AWQ</td> <td>29,885</td> </tr> <tr> <td>falcon-7b</td> <td>11,178</td> </tr> <tr> <td>Llama-2-13B-AWQ</td> <td>61,531</td> </tr> <tr> <td>Llama-2-13B-chat-AWQ</td> <td>43,145</td> </tr> <tr> <td>Llama-2-70B-AWQ</td> <td>31,297</td> </tr> <tr> <td>Mistral-7B-Instruct</td> <td>46,825</td> </tr> <tr> <td>Mistral-7B-v0.1</td> <td>61,851</td> </tr> <tr> <td>mpt-7b</td> <td>12,232</td> </tr> <tr> <td>OpenHermes-2.5-Mistral-7B</td> <td>21,221</td> </tr> <tr> <td>StableBeluga2-70B-AWQ</td> <td>21,046</td> </tr> <tr> <td>WizardCoder-Python-34B-V1.0-AWQ</td> <td>41,716</td> </tr> <tr> <td>WizardLM-70B-V1.0-AWQ</td> <td>41,979</td> </tr> <tr> <td>zephyr-7b-beta</td> <td>42,093</td> </tr> </tbody> </table> <p></p> For foundation models, the prompts were just document prefixes. For models that were instruction or chat tuned, I randomly broke the document prefixes into two parts, used the first part to form "user" instructions requesting a plausible continuation to the document and used the second part as "leading words" in a part of the token sequence that would ordinarily be filled in by the model (after something like a "### Response:" tag which varies a bit depending on what format the model was trained to expect). The leading words help to ensure the models generate plausible looking document completions, without any weird prefixes at the start of their responses or refusals.<p></p> <p>This data was generated in 2 stages. My observations from training models on the early data and the changes I made for the last ~15% are described below.</p> <ul> <li>Models trained on my initial data were worst at identifying text generated with a sampling temperature close to 1. In response, I switched from picking the sampling temperature from a uniform distribution ranging from 0 - 2, to instead using a gaussian distribution centered at 1 with a standard deviation of 0.2, clipped to be in the range 0 - 2. This helped to concentrate more of my hardware resources on the tricky data my classification models were struggling with.</li> <li>My models were best at identifying AI generated text related to the medical subsets of The Pile (PubMed Central &amp; PubMed Abstracts). As a result, I figured adding more medical related text to the training dataset would be a waste of hardware resources and started filtering it out (before the LLM is prompted, not as a post-processing step like what I did to discard the data with copyright issues).</li> <li>My models were worst at identifying AI generated code. This data didn't seem too relevant to the competition, so I started filtering out everything from the GitHub slice of The Pile, similar to what I did for PubMed.</li> </ul> <h2>SlimPajama Completions</h2> <p>The way I created this dataset is very similar to what I did for the the last of the Pile completions, just with source documents from <a href="https://huggingface.co/datasets/cerebras/SlimPajama-627B" target="_blank">SlimPajama</a> instead of The Pile and a slightly different combination of models. This data was generated somewhat preemptively to hopefully mitigate any accuracy lost from filtering out the problematic (copyrighted) parts of The Pile. 
It didn't really help on the public LB, but might have on the private LB - I'd need to look back at the scores more closely. </p> <p>The mixture of models I used to create this dataset is outlined below.</p> <table> <thead> <tr> <th>Model</th> <th>Document count</th> </tr> </thead> <tbody> <tr> <td>airoboros-l2-70B-gpt4-1.4.1-AWQ</td> <td>23,399</td> </tr> <tr> <td>deepseek-coder-33B-base-AWQ</td> <td>20,718</td> </tr> <tr> <td>dolphin-2.6-mistral-7b</td> <td>24,168</td> </tr> <tr> <td>Mistral-7B-Instruct-v0.2</td> <td>24,813</td> </tr> <tr> <td>Mistral-7B-v0.1</td> <td>24,618</td> </tr> <tr> <td>mixtral-8x7b-v0.1-AWQ</td> <td>24,609</td> </tr> <tr> <td>Mixtral-8x7B-Instruct-v0.1-GPTQ</td> <td>24,581</td> </tr> <tr> <td>Nous-Hermes-2-SOLAR-10.7B-AWQ</td> <td>22,833</td> </tr> <tr> <td>Nous-Hermes-Llama2-AWQ</td> <td>18,741</td> </tr> <tr> <td>SOLAR-10.7B-v1.0-AWQ</td> <td>24,623</td> </tr> </tbody> </table> <p></p>One concern I had about SlimPajama (and why I didn't use it initially) is that it contains text from the internet that was collected <em>after</em> the release of ChatGPT, so there might be some AI-generated text mislabeled as "human" text when I train on it. It is plausible this might be why it seemed to be lower quality than The Pile data.<p></p> <h2>PERSUADE essays</h2> <p>I generated a bit over 300k essays for the same assignments as the human essays in the PERSUADE corpus. These essays were divided into 3 subsets with varying generation strategies:</p> <ul> <li><strong>Basic:</strong> ~137k essays using variety of prompting strategies. 0-shot or 1-shot with ~140 different sets of instructions used to alter the LLM's writing style. Used a combination of 11 different models.</li> <li><strong>Student Imitator:</strong> In an attempt to make the AI-generated essays harder to identify, I finetuned Mistral 7B and Llama 2 13B to imitate the writing style of the students in the PERSUADE corpus. I used the resulting models to generate ~237k essays, ~177k of which were used for training and cross-validation purposes. The rest were discarded due to issues during finetuning that caused the LLMs to output incoherent text.</li> <li><strong>Adversarial:</strong> I adversarially generated ~12k essays that confuse a mixture of 17 "victim classifiers". The main strategy for generating these essays to back-track and try again whenever the in-progress essay starts to sound "too AI generated". However, some LLMs are seemingly incapable of generating introductions that fool my classifiers for certain assignments, so many of these essays begin with authentic human text in order to "get past the intro". However, even with that in place these essays were still ~100x more computationally expensive to generate than "normal" essays, so I don't have many of them.</li> </ul> <p>Unfortunately, these essays were not very useful in comparison to the Pile data. My classifiers seem to rapidly overfit to it and need more diverse data to generalize well.</p> <p>I feel the exact combinations of models &amp; instructions I used would be a bit too much to include here, so I suggest looking at the dataset I linked to in the introduction &amp; my data generation code for details.</p> <h2>Tricky Crawl</h2> <p>This was created by filtering the Common Crawl subset of The Pile (Pile-CC) to pull out human-authored text that was misclassified as AI generated by one of 2 moderately strong general-purpose AI content detection models. 
One of the classifiers was based on deberta-v3-base, the other on deberta-v3-xsmall.</p> <p>The version of this dataset used to train the models in the final submission was created by filtering 1.5 million documents to pull out the ~125k most confusing to the victim classifiers. </p> <p>The general idea behind this dataset was to reduce the number of false positives by boosting the amount of tricky human text in the training dataset. Mixing this into the training dataset with a 5% sampling proportion (so 5% of the training examples are from this dataset) seemed beneficial for the short-context models, but didn't make much difference for the full-context ones. </p> <h1>Data augmentation</h1> <p>The competition organizer's data seems to be a bit corrupted, so the models in our solution were trained with the following data augmentation steps to make them more robust.</p> <ul> <li><strong>Buggy spell check</strong> similar to the implementation described by <a href="https://www.kaggle.com/deltawi" target="_blank">@deltawi</a> in <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/456142" target="_blank">this discussion thread</a>. However, I used a regex which seems to handle ' characters in a way more consistent with the competition data and used <a href="https://github.com/bakwc/JamSpell" target="_blank">jamspell</a> instead of pyspellchecker because jamspell is faster. This executes with 70% probability for persuade documents, 20% for all other datasets.</li> <li><strong>Blacklisted character removal</strong> with the same blacklist suggested by <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> in <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/452172" target="_blank">this discussion thread</a>. This executes with 70% probability for persuade documents, 20% for all other datasets.</li> <li><strong>Typos are added.</strong> More specifically…<ul> <li>All types of typos supported by the <a href="https://github.com/ranvijaykumar/typo" target="_blank">typo library</a> have 2 opportunities to be added to all training examples with 10% probability each time.</li> <li>Capitalization of random characters is flipped. 2 opportunities to decapitalize an upper case letter, 1 opportunity to capitalize a lower case letter. All these "opportunities" for typos to be added have a 10% chance of occurring for each document.</li></ul></li> </ul> <h1>Mamba vs. DeBERTa</h1> <h2>Efficiency</h2> <p><code>mamba-790m</code> is roughly as fast as <code>deberta-v3-large</code> and consumes less GPU memory during training despite having over 2x as many parameters. It seems to have some major efficiency advantages due to being a structured state space model instead of a transformer.</p> <h2>Accuracy (and stability problems)</h2> <p><code>mamba-790m</code> did <em>almost</em> as well as DeBERTa during local CV on the Pile data (0.001 - 0.002 lower) and <em>almost</em> as well on the public LB (~0.003 lower than an "unlucky" DeBERTa run, ~0.006 lower than a "good" DeBERTa run). The main difference arose on the private LB. Mamba's score stayed roughly the same from public to private while most DeBERTa models jumped by ~0.01. 
As a result, Mamba seemed to be slightly useful on the public LB when included in an ensemble with low weight, but wound up dragging us down on the private LB.</p> <p>I think the weakness of Mamba here wasn't really due to architectural flaws or because structured state space models are inherently inferior to transformers, but rather because this was my first time using Mamba and I rushed to train it in the last 3-4 days of the competition. Additionally, the <a href="https://github.com/state-spaces/mamba" target="_blank">mamba-ssm</a> library seems to be primarily intended for text generation, not classification, so it isn't super straightforward to use for workloads like this.</p> <p>To elaborate on the <strong>mistakes I made</strong> when training it, I generally initialized the Mamba model like</p> <pre><code>import torch.nn as nn
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

model = MambaLMHeadModel.from_pretrained("state-spaces/mamba-790m")
# replace the LM head so the backbone emits 2 class logits per token
model.lm_head = nn.Linear(model.config.d_model, 2)
</code></pre> <p>which causes it to output predictions with shape (batch size, token count, 2). During early training runs, I then used the output from the last token in each sequence as class label predictions using an operation like</p> <pre><code>predictions = output.logits[:, -1, :]
</code></pre> <p>The problem with this approach is that the token sequences are padded and the padding was impacting the predictions. Getting predictions from the last token before the start of padding with an operation like</p> <pre><code># index of the last non-padded token in each sequence
last_token_indices = torch.clamp(attention_masks.sum(dim=1) - 1, min=0)
raw_predictions = torch.gather(
    logits, dim=1,
    index=last_token_indices.unsqueeze(-1).unsqueeze(-1).expand(-1, -1, logits.shape[-1])
).squeeze(1)
</code></pre> <p>seemed to increase the optimal learning rate by a factor of roughly 8x and allowed for much faster convergence to a more accurate model during relatively small-scale training runs with a single epoch over "only" 100,000 example documents. However, it was unstable when I attempted to scale up to more data with a learning rate that high. I didn't really have time to train it properly after figuring this out. As a result, the 2 Mamba models used by the 5th place solution were trained using the following non-ideal setups:</p> <ul> <li>1.25 million training documents with the original logit selection approach ([:, -1, :]) and a relatively low learning rate (2e-6 with a batch size of 4). This training run executed in the background while I used other GPUs to figure out how to train Mamba "properly".</li> <li>350,000 training documents with the max lr set to a problematically high value (1.6e-5) for the first ~70% of training, followed by repeated issues in which the loss became nan and the model started outputting garbage, thereby requiring repeated restarts from old checkpoints with much lower learning rates. I wound up manually dropping the learning rate all the way down to 5e-7 in the last 30% of training and did not have time to train it on as much data as most other models. I have a suspicion I would have gotten better results if I started the run from the beginning with a well-configured learning rate schedule and used a larger amount of data.</li> </ul> <h1>Additional training details</h1> <ul> <li><strong>Dropout:</strong> Totally disabled for all models. <ul> <li>Encountered <strong>strange behavior</strong> in which if I trained the DeBERTa models with Huggingface transformers library's default dropout rates, they were consistently more accurate at inference time if I left dropout enabled at inference time (i.e. leaving the model in "train" mode instead of switching to "eval" mode). 
I suspect this is because DeBERTa uses non-standard <code>StableDropout</code> layers that appear to do some sort of internal normalization that the model was learning to rely on, thereby forcing me to keep dropout enabled for best results. Needless to say, randomly dropping-out connections in the model at test time isn't good for accuracy. I got better results with dropout disabled both at training time and at test time, even in comparison to averaging across multiple forward passes through a model with dropout enabled.</li> <li>The <code>mamba-ssm</code> library doesn't support dropout.</li> </ul> </li> <li><strong>Learning rate schedule:</strong> Linear warm-up followed by linear decay. Generally warmed up for the first 5% of training during the local training runs, 30% during domain adaptation at test time. <strong>Longer warmup during domain adaptation seemed to make it more stable</strong>, with good results more consistently, but I didn't experiment with this much.</li> <li><strong>Batch size:</strong> For the models with a context length of 1024 tokens, I used a batch size of 2 for DeBERTa, 4 for Mamba, primarily because <i>Mamba is more memory efficient</i>. I generally used a batch size of 16 for the shorter context models used in domain adaptation (32 - 96 tokens).</li> <li><strong>Data mixtures:</strong> <ul> <li><strong>DeBERTa model with 1024 token context:</strong> 99% Pile, 1% Persuade <ul> <li>The other datasets didn't seem to help on the public LB with long-context models, but our best single model on the private LB scored 0.976 with the data mixture used for the final submission's shorter-context DeBERTa models.</li> </ul> </li> <li><strong>Short-context "student" DeBERTa models:</strong> 62% Pile, 32% Pajama, 5% Tricky Crawl, 1% Persuade</li> <li><strong>Mamba:</strong> 77% Pile, 20% Pajama, 2% Tricky Crawl, 1% Persuade</li> </ul> </li> <li><strong>Data volumes:</strong> Typically just did a single epoch with some subsets of the available data undersampled. <ul> <li><strong>Full-context DeBERTa:</strong> 1.4 million training documents (too impatient to wait &gt;2 days for it to train on more than this and doubt it would help much without an increase in <i>data diversity</i>)</li> <li><strong>Short-context DeBERTa:</strong> 1 million training documents (these models are finetuned at test time, so I didn't see any benefit pretraining longer)</li> <li><strong>Mamba:</strong> 350k - 1.25M training documents (would have trained on more if I had time - see section about DeBERTa vs. 
Mamba for rant about how this was rushed and non-ideal)</li> </ul> </li> </ul> <h1>Alternatives we experimented with</h1> <ul> <li>1D conv ResNets (scored &lt;= 0.87 on the LB, but EXTREMELY fast, ~10 minutes to evaluate on the leaderboard without GPU acceleration - might have scored better if I scaled it up, but DeBERTa gave better results right off the bat)</li> <li>Training deep learning based solutions using only Persuade related data (they didn't generalize well to the LB data - scaling up to bigger transformers doesn't help with this)</li> <li>Ensembling DeBERTa models with tf-idf (scored well on the public LB but got wrecked during the shakeup) </li> <li>Smaller DeBERTa models (they don't score as well)</li> <li>Just using models with smaller context windows (they don't score as well)</li> <li>Tried an iterative pseudolabeling strategy similar to the one used in <a href="https://www.kaggle.com/asalhi" target="_blank">@asalhi</a>'s 21st place solution (<a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/470148" target="_blank">https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/470148</a>). Discarded this due to big random score variations that seemed to depend on how the training/test data is shuffled. The domain adaptation strategy used in our 5th place solution scores well more consistently.</li> </ul> <blockquote> <p>What was your insight to select the datasets?</p> </blockquote> <ul> <li>Used the PERSUADE corpus because it is similar the competition organizer's data. Then migrated to larger, more diverse, datasets to mitigate overfitting.</li> <li>Mostly used The Pile because I had it on hand and it is old enough to not be infected by text from ChatGPT.</li> <li>Switched over to SlimPajama because it is under a less murkey license than The Pile.</li> <li>The "Tricky Crawl" data is really just a subset of Pile-CC.</li> </ul> <blockquote> <p>How did you remove noises in the data?</p> </blockquote> <p>My attitude was to "embrace the noise", not to filter it out. I had a suspicion the secret test data was somewhat corrupted and/or adversarial in nature, so I wanted my models to be robust against <em>whatever the competition organizers feed in</em>.</p> <p>I actually did some data augmentation to inject MORE noise into the training text, I didn't do anything to clean it. I'll update the original post to describe the data augmentation shortly.</p> <p>I did not train with contrastive loss. The domain adaptation strategy was only <em>loosely</em> inspired by SimCLR. I effectively just imitated the idea that you should train models so that random cropping doesn't impact the results, nothing else.</p> <p>Yes. The best teacher model scored 0.970 with normal inference code, no student models or ensemble.</p> <p>Two main reasons:</p> <ol> <li>I was fascinated by the efficency advantages claimed in <a href="https://arxiv.org/ftp/arxiv/papers/2312/2312.00752.pdf" target="_blank">https://arxiv.org/ftp/arxiv/papers/2312/2312.00752.pdf</a> and wanted to try it for myself.</li> <li>I figured that since it isn't a transformer it might have different inductive biases than my DeBERTa models, thereby causing its errors to be less correlated with the ones made by my DeBERTa models, which would help make it more useful in an ensemble.</li> </ol> <p></p><p>As for why I chose <code>mamba-790m</code>, specifically, as opposed to other sizes, it was mostly due to memory usage and time constraints. 
I was rushing to try it out in the last several days of the compitition, so I chose the max size I thought my GPUs could handle given the time constraints and need for experimentation.</p> <p>It was largely a response to the difficulties getting transformer models to generalize well from the Persuade corpus to the secret test data. Migrating to a larger amount of more diverse data helped a ton, but I eventually hit a wall in which further scaling didn't seem to help to any more. I seemed to be overfitting to The Pile and couldn't easily get my hands on vastly more diverse data than that, so I started looking into unsupervised domain adaptation as a (somewhat desperate) way to better generalize across datasets.</p> <blockquote> <p>Mistral-7b</p> </blockquote> <p>Just to be sure there are no miscommunications, I used <a href="https://huggingface.co/state-spaces/mamba-790m" target="_blank">Mamba</a>, not Mistral :-). They're very different architecturally (structured state space model vs. transformer).</p> <p>As for Mistral, this is only briefly alluded to near the end of my post, but two of my teammates ( <a href="https://www.kaggle.com/poskok" target="_blank">@poskok</a> and <a href="https://www.kaggle.com/domdejonge" target="_blank">@domdejonge</a>) experimented with larger 7 billion parameter models like Mistral. Those models didn't score as well (both public &amp; private), presumably because they used a smaller amount of less diverse training data.</p>
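<p>To make the domain adaptation step described earlier concrete, here is a rough sketch of the short-context student inference: score overlapping character chunks with a stride of half the context length, average per document, then blend the two students 60/40. The <code>student_128</code>/<code>student_256</code> callables stand in for the finetuned DeBERTa students and are placeholders, not the actual notebook code:</p>
<pre><code>import numpy as np

def predict_document(text, student, context_chars):
    """Average a student's scores over overlapping character chunks (stride = half the context)."""
    stride = context_chars // 2
    starts = range(0, max(len(text) - context_chars, 0) + 1, stride)
    chunks = [text[i:i + context_chars] for i in starts]
    return float(np.mean([student(chunk) for chunk in chunks]))

def ensemble_students(text, student_128, student_256):
    # 60% weight to the 128-character student, 40% to the 256-character one
    return 0.6 * predict_document(text, student_128, 128) + 0.4 * predict_document(text, student_256, 256)
</code></pre>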
LLM - Detect AI Generated Text
6th place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>An LLM tends to predict the words generated by an LLM with a higher probability than those written by a human. For this reason, I used entropy-based features.<br> I selected the best features on DAIGT-V4-TRAIN-DATASET and I do not know why these 5 features are better than others.</p> <p>I used the formula of information content (surprisal). The logits in position 0 are relative to the probabilities of the token in position 1 and so on. Therefore, the last logits are relative to a token that is not available.<br> In the code, you can also see:<br> <code>tokens = input_ids[:, 1:]</code></p> <p>I have previously observed in different contexts that training exclusively on real data leads to better generalization.<br> <a href="https://arxiv.org/abs/1808.08396" target="_blank">https://arxiv.org/abs/1808.08396</a><br> <a href="https://arxiv.org/abs/2012.02512" target="_blank">https://arxiv.org/abs/2012.02512</a></p> <p>Many thanks to Kaggle and the organizers for creating the competition.</p> <p>Link to training and inference code: <a href="https://www.kaggle.com/code/davidecozzolino/coder-one2" target="_blank">https://www.kaggle.com/code/davidecozzolino/coder-one2</a> <br> Link to github repository: <a href="https://github.com/davin11/entropy-based-text-detector" target="_blank">https://github.com/davin11/entropy-based-text-detector</a><br> Link to model summary document: <a href="https://github.com/davin11/entropy-based-text-detector/blob/main/Documentation.pdf" target="_blank">https://github.com/davin11/entropy-based-text-detector/blob/main/Documentation.pdf</a></p> <p>Solution:</p> <ol> <li>A pre-trained Large Language Model (LLM) is used to compute entropy-based synthetic features.</li> <li>Starting from feature vectors with few elements, a One-Class SVM is trained using only the human-written essays provided by the organizers as the training set.</li> </ol> <p>Note:</p> <ul> <li>I used <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v4-train-dataset" target="_blank">DAIGT-V4-TRAIN-DATASET</a> to select the best features.</li> <li>I tried different LLMs; phi-2 proved to be the best</li> </ul> <p>I did not train the LLM. I used an already trained LLM <a href="https://huggingface.co/microsoft/phi-2" target="_blank">https://huggingface.co/microsoft/phi-2</a><br> I do not know if there will be a winners' call presentation.</p> <p>Hi Chan,</p> <p>You can find a report about the results in <a href="https://github.com/davin11/entropy-based-text-detector/blob/main/Documentation.pdf" target="_blank">this document</a>.</p> <p>You can also see the several versions of these two notebooks:<br> <a href="https://www.kaggle.com/code/davidecozzolino/coder-one?scriptVersionId=158812092" target="_blank">https://www.kaggle.com/code/davidecozzolino/coder-one?scriptVersionId=158812092</a><br> <a href="https://www.kaggle.com/code/davidecozzolino/coder-one2?scriptVersionId=158905406" target="_blank">https://www.kaggle.com/code/davidecozzolino/coder-one2?scriptVersionId=158905406</a></p> <p>In these notebooks, the variable <code>dict_llm</code> sets the LLM and the variable <code>feats_list</code> sets the used features.</p>
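<p>A rough sketch of the two ingredients above: per-token surprisal from a pre-trained LM and a One-Class SVM fitted on human essays only. The five statistics below are illustrative, not the author's selected <code>feats_list</code>, and <code>human_texts</code>/<code>test_texts</code> are placeholder variables:</p>
<pre><code>import numpy as np
import torch
import torch.nn.functional as F
from sklearn.svm import OneClassSVM
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/phi-2")
lm = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float16, device_map="auto")

@torch.no_grad()
def surprisal_features(text):
    ids = tok(text, return_tensors="pt", truncation=True).input_ids.to(lm.device)
    logits = lm(ids).logits[:, :-1]          # logits at position t predict the token at position t+1
    tokens = ids[:, 1:]                      # drop the last logits, which refer to an unavailable token
    logp = F.log_softmax(logits.float(), dim=-1)
    surprisal = -logp.gather(-1, tokens.unsqueeze(-1)).squeeze(-1)[0].cpu().numpy()
    return np.array([surprisal.mean(), surprisal.std(), np.median(surprisal),
                     np.quantile(surprisal, 0.9), surprisal.max()])

X_human = np.stack([surprisal_features(t) for t in human_texts])   # human-written train essays only
detector = OneClassSVM(nu=0.1, kernel="rbf").fit(X_human)
generated_score = -detector.decision_function(np.stack([surprisal_features(t) for t in test_texts]))
</code></pre>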
LLM - Detect AI Generated Text
98th Place Solution for the DAIGT Competition: TF-IDF + fastText + Distilbert + pseudolabels + instance weighting
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>(public LB best: .970; private selected best: .909; private unselected best: .923)</p> <h1>Context</h1> <p>I had a bunch of time on my hands in December and January. I came across the <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/overview" target="_blank">LLM Detect AI-Generated Text competition</a> and found the prospect very appealing. Over those last couple of months, I had the time to try a lot of things; some worked, a lot failed. <br> I finish at rank ~100, after having reached 15th on public LB in early Jan. But I am happy with a solo silver. I loved every moment; it has been a thrilling competition, and I look forward to the next!</p> <h1>Overview</h1> <h3>1. Data</h3> <ul> <li>My final submission used much exclusively the <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">daigt V2 train dataset</a>, from which I excluded 'mistral7binstruct_v2' and 'llama2_chat' as they were harming my public score. I played a bit with other data sources but they did not help.</li> <li>I used <a href="https://www.kaggle.com/piotrkoz" target="_blank">@piotrkoz</a>'s library for typo preprocessing, and it helped my score a bit, probably by ~.001. Thanks a lot for sharing!</li> </ul> <h3>2. Models</h3> <ul> <li>The bulk of my ensemble (~75%) was inspired by the public LB TF-IDF approach and models. I improved this a bit, but in the end it is not substantially different from the published work. I included the canonical MNB, SGD, LGBM and Catboost. Credits go to <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> for sharing this strong approach 🙏</li> <li><a href="https://fasttext.cc/" target="_blank">fastText</a>. FastText alone did not perform great (my best pure fastText run was at .85 on public LB, and surprisingly only .70 on private); however it consistently gave a good boost (maybe ~.002 AUC) to ensembles in which it was mixed. [-&gt; 12% of final weight]</li> <li><a href="https://huggingface.co/distilbert-base-uncased" target="_blank">DistilBERT</a>. I fine-tuned the pre-trained model for 5 epochs; again, this alone did not perform very well on public or private LB, but still provided a nice boost to ensembles. [-&gt; 12% of final weight]</li> </ul> <h3>3. Pseudo-labeling</h3> <ul> <li>After a first model fit, I added the bottom and top ~10% of predictions to the training data to run a second model fit. This worked fairly well, giving me a ~.002 boost in public LB.</li> </ul> <h3>4. Adversarial instance weighting</h3> <ul> <li>I trained a (fastText) model to predict whether instances come from the train set or the test set. I then gave a slightly lower weight to instances from the train set that the model deems the least likely to belong to the test set. This gave me only a small further boost in public LB; but it now looks like private LB responded well to this.</li> </ul> <h3>5. 
Things that regrettably didn't work on public LB</h3> <p>and consequently did not make it to the final submission:</p> <h4>Canine</h4> <p>I played a lot with a character-level encoder called <a href="https://huggingface.co/google/canine-c" target="_blank">Canine</a>. My intuition was that a model operating at character-level would skip the tokenization step, and therefore possibly be somewhat robust to whatever obfuscation occurred on the test set. Unfortunately, this did not perform as well as DistilBERT on public LB, so I did not keep it; however, I can now see that my best Canine did a bit better than my best DistilBERT on private LB.</p> <h4>Prompt ID</h4> <p>I tried to add a categorical feature indicating to the model which prompt an essay was written against (potentially useful only for pseudo-labelled data). My public LB score responded very poorly to this, dropping to .93, so I ran away from the idea and never looked back; however the private LB now looks like it actually kinda liked the addition, as it gave me one of my best scores (private LB .916).</p> <h4>Rank averaging</h4> <p>Again a thing that my public LB score hated, but which would have actually helped my private score (public LB drop by .007; private LB boost by .013)</p> <p>And a big bunch of other things that worked neither on public nor private LB 😅</p> <h3>6. Runtime</h3> <p>My best submissions took about 6 to 8 hours to run (the number of Catboost iterations being the adjustment variable). I gave up on the efficiency leaderboard idea early on.</p> <p>Sure!</p> <ul> <li>I considered the predicted probability for each test essay of being AI generated.</li> <li>I took the 'bottom' 10% of test essays (the ones most likely to be human written), assigning them pseudo-label 0</li> <li>I took the 'top' 10% of test essays (the ones most likely to be AI-generated), assigning them pseudo-label 1.</li> <li>I then reran all the models (including some preprocessing e.g. TFIDF vectorization) with this new training set (original training data + pseudo-labelled data), and used that to score the 80% remaining test data.</li> </ul> <p>I took this quantile-based approach rather than probability thresholds because I wanted to control how many essays ended up in the pseudo-labelled training set. But as a result, I have no idea what the actual cutoff predicted probabilities were.</p>
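<p>A minimal sketch of the quantile-based pseudo-labelling described above; the 10% fraction is from the write-up, while the <code>text</code>/<code>label</code> column names and the re-fit step are illustrative assumptions:</p>
<pre><code>import pandas as pd

def add_pseudo_labels(train_df, test_df, probs, frac=0.10):
    """Attach hard labels to the most confident test predictions and append them to the train set."""
    scored = test_df.assign(pred=probs)
    n = int(len(scored) * frac)
    humans = scored.nsmallest(n, "pred").assign(label=0)   # most human-like -> pseudo-label 0
    ais = scored.nlargest(n, "pred").assign(label=1)       # most AI-like -> pseudo-label 1
    pseudo = pd.concat([humans, ais])[["text", "label"]]
    return pd.concat([train_df, pseudo], ignore_index=True)
    # the remaining ~80% of the test set is then scored by models refit on this enlarged train set
</code></pre>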
LLM - Detect AI Generated Text
A kaggle newbie's 🥈23rd solution: Just follow-up excellent public works & ideas
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the host and all kagglers with their sharing of excellent works and ideas! As a kaggle novice, I'm very fortunate to get a silver place in my first competition. Although it's a bit pity that I didn't achieve many groundbreaking original work during my trials, I'm willing to share my final submission and choices.</p> <p>Here's my <a href="https://www.kaggle.com/code/spraut23333/daigt-public-0-965-private-0-927" target="_blank">solution</a></p> <h3>Details of Solution</h3> <p><strong>1. Data</strong></p> <p><a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> 's great <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">daigt-v2 dataset</a> and <a href="https://www.kaggle.com/carlmcbrideellis" target="_blank">@carlmcbrideellis</a> 's <a href="https://www.kaggle.com/datasets/carlmcbrideellis/llm-mistral-7b-instruct-texts" target="_blank">mistral-7b dataset</a> with prompts <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/467820" target="_blank">mentioned by</a> <a href="https://www.kaggle.com/bianshengtao" target="_blank">@bianshengtao</a> excluded. </p> <p>Data augmentation with <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a> 's <a href="https://www.kaggle.com/code/aerdem4/daigt-superfast-unsupervised-baseline" target="_blank">unsupervised method</a> to generate pseudo test label for a small subset of test data. </p> <p><strong>2. Tokenization &amp; TF-IDF vectorization</strong></p> <p>I followed the <a href="https://www.kaggle.com/code/datafan07/train-your-own-tokenizer" target="_blank">general pipeline</a> of training BPE tokenizer on the test data by <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> and chose <a href="https://www.kaggle.com/code/verracodeguacas/sentencepiece-constructions" target="_blank">SentencePiece tokenizer</a> for final submission as introduced by <a href="https://www.kaggle.com/verracodeguacas" target="_blank">@verracodeguacas</a> .</p> <p><strong>3. Text correction</strong></p> <p>I have seen discussions about the license of different text-correction libraries,but I'm still unclear about the permission for usage of these tools since haven't seen a clear claim from the host. My final solutions include text correction on the test data with autocorrect, which is LGPL-3.0 license. But I also chose a version without any text correction for submission and it still stands on silver zone with private LB 0.923. </p> <p><strong>4. ML-Models for TF-IDF features</strong></p> <p>MultiNomialNB + SGDClassifier(linearSVC) + LightGBM(GBDT) + LightGBM(DART with GOSS) + CatBoost</p> <p>Focal loss with different gammas for the two lightgbm tree models.</p> <p>Some model parameters inspired by <a href="https://www.kaggle.com/batprem" target="_blank">@batprem</a> 's <a href="https://www.kaggle.com/code/batprem/llm-daigt-analyse-edge-cases" target="_blank">generous sharing</a>.</p> <p><strong>5. 
Transformer-based Model</strong></p> <p>I added <a href="https://www.kaggle.com/mustafakeser4" target="_blank">@mustafakeser4</a> 's <a href="https://www.kaggle.com/code/mustafakeser4/inference-detectai-distilroberta-0-927" target="_blank">DistilRoBerta</a> to my ensemble in the last few days before the deadline and got an instant boost from 0.962 to 0.965 on public LB, which gave me much confidence and motivation.</p> <p><strong>6. Other tricks</strong></p> <p>Fixing the max features of TF-IDF to 5M.</p> <p><a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/468363" target="_blank">Post processing</a> method by <a href="https://www.kaggle.com/hyunsoolee1010" target="_blank">@hyunsoolee1010</a> .</p>
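<p>For reference, a minimal sketch of the "train a tokenizer on the test essays, then TF-IDF" pipeline this solution builds on; the vocabulary size and n-gram range are illustrative, not the submission's settings:</p>
<pre><code>from tokenizers import Tokenizer, models, pre_tokenizers, trainers
from sklearn.feature_extraction.text import TfidfVectorizer

def fit_test_tokenizer(test_texts, vocab_size=30522):
    """Fit a BPE tokenizer on the hidden test essays so the vocabulary matches their distribution."""
    tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
    tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
    trainer = trainers.BpeTrainer(vocab_size=vocab_size, special_tokens=["[UNK]"])
    tokenizer.train_from_iterator(test_texts, trainer=trainer)
    return tokenizer

def tfidf_features(tokenizer, train_texts, test_texts):
    tokenize = lambda t: tokenizer.encode(t).tokens
    vec = TfidfVectorizer(ngram_range=(3, 5), tokenizer=tokenize, preprocessor=lambda x: x,
                          sublinear_tf=True, lowercase=False)
    X_test = vec.fit_transform(test_texts)    # vocabulary comes from the test essays
    X_train = vec.transform(train_texts)
    return X_train, X_test
</code></pre>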
LLM - Detect AI Generated Text
Comprehensive 1st Place Write-Up
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our team is still a bit stunned that we got 1st place (🤯). <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/470121" target="_blank">Raja posted earlier with a short summary</a>, but here is a more detailed look at our solution and what we think made a difference.</p> <h1>tldr</h1> <p>The hosts did not reveal much information about what the generated essays would be like, so we aimed to create a diverse set of generated essays in hopes that it would (1) make a model that generalizes and (2) have data similar to the private leaderboard. The modeling approach was less important, as we had multiple single models in the 0.970+ range due to the quality of the dataset. </p> <h1>Overview</h1> <ul> <li><a href="#datamix">Datamix</a></li> <li><a href="#models">Models</a></li> <li><a href="#ensemble">Ensemble</a></li> </ul> <h1>Datamix</h1> <p>Our datamix was created in an incremental way focusing on size, diversity and complexity to facilitate good generalization capabilities and strong resistance against adversarial examples. For each datamix iteration, we attempted to plug blindspots of the previous generation models while maintaining robustness.</p> <p>To maximally leverage in-domain human texts, we used the entire Persuade corpus comprising all 15 prompts. We also included diverse human texts from sources such as OpenAI GPT2 output dataset, <a href="https://github.com/scrosseye/ELLIPSE-Corpus" target="_blank">ELLIPSE corpus</a>, NarrativeQA, wikipedia, NLTK Brown corpus and IMDB movie reviews.</p> <h3>Sources for our generated essays can be grouped under four categories:</h3> <ol> <li>Proprietary LLMs (gpt-3.5, gpt-4, claude, cohere, gemini, palm)</li> <li>Open source LLMs (llama, falcon, mistral, mixtral)</li> <li>Existing LLM generated text datasets<ul> <li><a href="https://www.kaggle.com/datasets/conjuring92/fpe-processed-dataset?select=mlm_essays_processed.csv" target="_blank">Synthetic dataset made by T5</a></li> <li><a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">DAIGT V2 subset</a></li> <li><a href="https://github.com/ryuryukke/OUTFOX" target="_blank">OUTFOX</a></li> <li><a href="https://github.com/vivek3141/ghostbuster-data" target="_blank">Ghostbuster data</a></li> <li><a href="https://github.com/openai/gpt-2-output-dataset" target="_blank">gpt-2-output-dataset</a></li></ul></li> <li>Fine-tuned open-source LLMs (mistral, llama, falcon, deci-lm, t5, pythia, BLOOM, GPT2). For LLM fine-tuning, we leveraged the <a href="https://github.com/scrosseye/persuade_corpus_2.0" target="_blank">PERSUADE corpus</a> in different ways:<ul> <li>Instruction tuning: Instructions were composed of different metadata e.g. prompt name, holistic essay score, ELL status and grade level. Responses were the corresponding student essays.</li> <li>One topic held out: LLMs fine-tuned on PERSUADE essays with one prompt held out. When generating, only the held out prompt essays were generated. 
This was done to encourage new writing styles.</li> <li>Span wise generation: Generate one span (discourse) at a time conditioned on the remaining essay.</li></ul></li> </ol> <p>For fine-tuning LLMs we used autotrain and custom code based on the transformers library. </p> <p>We used a wide variety of generation configs and prompting strategies to promote diversity &amp; complexity in the data. Generated essays leveraged a combination of the following:</p> <ul> <li><a href="https://huggingface.co/blog/introducing-csearch" target="_blank">Contrastive search</a></li> <li>Use of Guidance scale, typical_p, suppress_tokens</li> <li>High temperature &amp; large values of top-k </li> <li>Prompting to fill-in-the-blank: randomly mask words in an essay and ask the LLM to reconstruct the original essay (similar to MLM)</li> <li>Prompting without source texts</li> <li>Prompting with source texts</li> <li>Prompting to rewrite existing essays</li> </ul> <p>Finally, we incorporated augmented essays to make our models aware of typical attacks on LLM content detection systems and obfuscations present in the provided training data. We mainly used a combination of the following augmentations on a random subset of essays:</p> <ul> <li>Spelling correction</li> <li>Deletion/insertion/swapping of characters</li> <li>Replacement with synonym </li> <li>Introduce obfuscations</li> <li>Back translation</li> <li>Random capitalization</li> <li>Swap sentence</li> </ul> <p>As a minor detail, we created a heavily pre-processed version (removed special characters, normalized whitespace, and changed to all lowercase) of our datamix. Our hypothesis was that a model trained on this version would learn deeper patterns and thus make a significant contribution to ensembling. Furthermore, the heavy pre-processing would reduce the risk of random character attacks in the hidden test set.</p> <p>Our best performing models were trained on 160k samples (without pre-processing), out of which 40k were human written.</p> <h2>Models</h2> <p>We adopted the following modelling strategies:</p> <ul> <li>LLM (Q)LoRA fine-tuning: Mistral 7b</li> <li>Deberta-v3<ul> <li>Classification</li> <li>Custom Tokenizer + MLM + Pseudo label </li> <li>Ranking</li></ul></li> <li>Ghostbuster (llama 7b, tiny llama 1.1B)</li> <li>Ahmet’s Unsupervised Approach</li> </ul> <p>We hypothesize that our modelling strategies themselves had a lesser impact on the overall performance as compared to the datamix. Each of our individual models would have been in the gold medal range.</p> <h3>LLM (Q)LoRA fine-tuning: Mistral 7b</h3> <p>We fine-tuned the mistralai/Mistral-7B-v0.1 backbone using (Q)LoRA with the config provided below on our carefully curated datamix.</p> <pre><code>from peft import LoraConfig, TaskType

peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type=TaskType.SEQ_CLS,
    inference_mode=False,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
</code></pre> <p>We also used <a href="https://github.com/huggingface/autotrain-advanced" target="_blank">autotrain-advanced</a> to train some LLMs for text generation.</p> <h3>Deberta</h3> <h4>Classification: deberta-v3-large</h4> <p>This uses the AutoModelForSequenceClassification in transformers<br> BCE loss with continuous labels (e.g. 
essays generated with instruction-tuned LLMs assigned less than 1 score)</p> <h4>Custom Tokenizer + MLM + Pseudo label: deberta-v3-small</h4> <ul> <li>(<a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/458522" target="_blank">Inspiration</a>)</li> <li>Derive a custom tokenizer based on train + test essays</li> <li>Train a small model using Masked Language Modeling (MLM) on the train + test essays</li> <li>Enrich training datamix with pseudo labeled test essays</li> <li>Train for one epoch, followed by Inference on test essays</li> <li>Intuition: these steps would provide a unique advantage to the resulting model due to a specialized understanding of the hidden test set.</li> </ul> <h4>Ranking: deberta-v3-large</h4> <p>Pairwise loss with margin</p> <h3>Ghostbuster</h3> <p>(<a href="https://github.com/vivek3141/ghostbuster" target="_blank">original repo here</a>)<br> This approach uses the token probs from two small-ish LLMs (originally davinci and ada from OpenAI) as well as the token probs for unigram and trigram models.<br> The sequence probs go through vector operations when mixing them across models (e.g. divide llm1 probs with llm2 probs, subtract unigram from trigram) and then into a scalar operation (take the max prob of the sequence)<br> Instead of using models via the OpenAI API, we used Llama 7b and Tiny Llama 1.1B (the models must have the same tokenizer)<br> The unigram and trigram models were trained using the Ghostbuster repo's code on the nltk Brown corpus using the Llama tokenizer.<br> The first 25 tokens in the sequence are ignored, as the models would not have much context to generate meaningful representations.</p> <p>We saw a small boost when fine-tuning the tiny llama model on texts from the PERSUADE corpus<br> Instead of doing the structured search across all operations, we did 10 operations of each of the four model sequence probs (llama 7b, tiny llama, unigram, trigram) as well as the same 10 operations on the ratio of the following models: llama 7b/tiny llama, llama 7b/unigram, llama 7b/trigram, tiny llama/unigram, tiny llama/trigram, unigram/trigram. <br> The ten operations were: min, max, mean, median, 10% quantile, 25% quantile, 75% quantile, 90% quantile, L2 norm, and variance<br> All together this is 100 features<br> Instead of doing logistic regression, we used an ensemble of an SVM classifier and a Random Forest Classifier. <br> We did not do much tuning to hyperparameters.<br> Rapids was used to train significantly faster than sklearn</p> <h3><a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/465156" target="_blank">Ahmet’s Unsupervised Approach</a></h3> <p>We implemented a variation of this approach by modifying the logic for weak labelling and final score computation.<br> We used confident predictions from our mistral model for weak labelling<br> For a given essay, we first picked neighboring human &amp; generated essays based on a dynamic threshold. Thereafter, we used mean aggregation (instead of min aggregation in the original work) of similarity scores over selected essays.</p> <h2>Ensemble</h2> <p>We used the rankings, rather than the raw prediction values when combining the predictions. The minimum scoring text gets a rank of 1, and the maximum scoring text gets a rank of n, where n is the number of essays in the test set. These ranks are averaged between models and the averaged rank becomes the final value in the “generated” column. 
Minor weighting was done based on public LB and intuition. We applied the most weight to mistral-7b models.</p> <p>Our highest scoring selected submission is <a href="https://www.kaggle.com/code/nbroad/r100-ensemble" target="_blank">available here</a></p> <p>Our code is <a href="https://github.com/rbiswasfc/llm-detect-ai" target="_blank">available here</a></p> <p>Huge shoutout to Raja (@conjuring92) and Udbhav (@ubamba98) for their great contributions!! 👏👏👏</p> <p>Yes we generated more data</p> <p><a href="https://www.kaggle.com/jackawade" target="_blank">@jackawade</a> <br> Datamix here: <a href="https://www.kaggle.com/datasets/conjuring92/ai-mix-v26" target="_blank">https://www.kaggle.com/datasets/conjuring92/ai-mix-v26</a></p> <p>Each person spent roughly 50-150 hours </p>
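<p>The rank-averaging ensemble described above can be sketched as follows. This is a minimal illustration rather than the team's exact code; the model names, probabilities and weights below are made-up placeholders.</p> <pre><code>import numpy as np
from scipy.stats import rankdata

def rank_average(pred_lists, weights=None):
    """Average per-model ranks (1..n) instead of raw probabilities."""
    preds = np.asarray(pred_lists, dtype=float)       # shape: (n_models, n_essays)
    ranks = np.vstack([rankdata(p) for p in preds])   # lowest score gets rank 1, highest gets rank n
    if weights is None:
        weights = np.ones(len(ranks))
    return np.average(ranks, axis=0, weights=weights)  # this becomes the "generated" column

# hypothetical outputs of three models on five essays
p_mistral = [0.91, 0.12, 0.55, 0.98, 0.40]
p_deberta = [0.80, 0.05, 0.60, 0.99, 0.35]
p_ghostbuster = [0.70, 0.20, 0.45, 0.90, 0.30]
print(rank_average([p_mistral, p_deberta, p_ghostbuster], weights=[0.5, 0.3, 0.2]))
</code></pre> <p>Because AUC depends only on the ordering of the predictions, replacing raw probabilities with ranks removes calibration differences between very different model families before averaging.</p>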
LLM - Detect AI Generated Text
KeyboardAug - Private LB 0.935 Public LB 0.929
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for the encouragement!</p> <p>I see!</p> <p>I don't have a good explanation. It seems that the distribution of the public test set was a bit different from the distribution of the private test set.</p> <p>One of my team's submissions scored 0.935 in the private LB. We were quite surprised to find this, as it scored only 0.929 in the public LB.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F79d73b978df22e24b22c5518f8ffc543%2FKeyboardAug.png?generation=1706085322427282&amp;alt=media"></p> <p>Selecting this submission for final scoring would have put us in a gold medal position. Unfortunately, we didn't end up selecting it, and ended up in a bronze medal position. (We didn't have a good local validation scheme, so there was no way of getting a hint that this particular submission scored this high in the private LB.)</p> <p>What worked was a simple use of <code>KeyboardAug</code> (on the AI generated texts of DAIGT V2):</p> <pre><code>import nlpaug.augmenter.char as nac

# argument names reconstructed; only the values (0.05, 0.05, None, None and three flags)
# were preserved in the original listing
aug = nac.KeyboardAug(aug_char_p=0.05, aug_word_p=0.05,
                      aug_char_max=None, aug_word_max=None,
                      include_special_char=False, include_numeric=False,
                      include_upper_case=False)
</code></pre> <p>This is an augmentation that applies typo error simulation to textual input. The full documentation is available here: <a href="https://nlpaug.readthedocs.io/en/latest/augmenter/char/keyboard.html" target="_blank">https://nlpaug.readthedocs.io/en/latest/augmenter/char/keyboard.html</a></p> <p>Here's the data prep notebook: <a href="https://www.kaggle.com/code/sambitmukherjee/nlpaug-daigt-v2" target="_blank">https://www.kaggle.com/code/sambitmukherjee/nlpaug-daigt-v2</a></p> <p>And here's the submission notebook: <a href="https://www.kaggle.com/code/sambitmukherjee/with-nlpaug-explained-llm-model/notebook" target="_blank">https://www.kaggle.com/code/sambitmukherjee/with-nlpaug-explained-llm-model/notebook</a></p> <hr> <p>Two other things that worked for us in the private LB:</p> <p>A. Using a synonym replacement augmentation (on the AI generated texts of DAIGT V2).</p> <p>Private LB: 0.914</p> <p>Public LB: 0.859</p> <p>Data prep notebook: <a href="https://www.kaggle.com/code/sambitmukherjee/daigt-v2-synonym-replace/notebook" target="_blank">https://www.kaggle.com/code/sambitmukherjee/daigt-v2-synonym-replace/notebook</a></p> <p>Submission notebook: <a href="https://www.kaggle.com/code/sambitmukherjee/synonym-replace-explained-llm-model?scriptVersionId=158511025" target="_blank">https://www.kaggle.com/code/sambitmukherjee/synonym-replace-explained-llm-model?scriptVersionId=158511025</a></p> <p>B. Using the autocorrect library to correct spelling mistakes (in the full DAIGT V2 as well as in the competition test set).</p> <p>Private LB: 0.903</p> <p>Public LB: 0.966</p> <p>Data prep notebook: <a href="https://www.kaggle.com/code/wasimmadha/autocorrect-v2" target="_blank">https://www.kaggle.com/code/wasimmadha/autocorrect-v2</a></p> <p>Submission notebook: <a href="https://www.kaggle.com/code/wasimmadha/llm-daigtext/notebook?scriptVersionId=159219989" target="_blank">https://www.kaggle.com/code/wasimmadha/llm-daigtext/notebook?scriptVersionId=159219989</a></p>
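<p>A minimal sketch of how such a character-level augmenter is typically applied to the generated-class rows of a DAIGT-style dataframe. The column names and the tiny example dataframe are assumptions for illustration; recent nlpaug versions return a list from <code>augment</code>, hence the <code>[0]</code>.</p> <pre><code>import pandas as pd
import nlpaug.augmenter.char as nac

aug = nac.KeyboardAug(aug_char_p=0.05, aug_word_p=0.05)  # typo-simulation augmenter

df = pd.DataFrame({
    "text": ["The essay argues that cars should be limited.",
             "I think the electoral college is unfair."],
    "label": [1, 0],  # 1 = AI generated, 0 = human (assumed column convention)
})

mask = df["label"] == 1  # only augment the AI-generated texts
df.loc[mask, "text"] = df.loc[mask, "text"].apply(lambda t: aug.augment(t)[0])
print(df)
</code></pre>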
LLM - Detect AI Generated Text
Learning log from the competition 🥈 - draft to be updated
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Just publishing what we learnt from the comp. Though we didn't select our best submission, there was a lot to learn from all contributors to the great notebooks, datasets and discussions. Here are some of our quick findings and things we should have done.. 😃</p> <p>To the team.. thanks bogo, <a href="https://www.kaggle.com/treesky" target="_blank">@treesky</a>, <a href="https://www.kaggle.com/alejopaullier" target="_blank">@alejopaullier</a> and Ravi for bearing with me.</p> <h2>1 <strong>Catboost overfitted the least on the tfidf model and linear models were surprisingly good</strong>.</h2> <p>Adding CatBoost helped the most in the linear-model ensemble; for the rest we observed that SGD and MNB overfit a lot in some leave-one-prompt-out validations. The issue was a lower public LB score, and we should maybe have selected this one, which gave around 0.919-0.92 as our best private LB.</p> <p>Catboost low on cv but balanced ensemble<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F768166%2F4263f683bbb69672e69f35607a7b77c3%2FScreenshot%202024-01-22%20at%209.31.56PM.png?generation=1705977171010050&amp;alt=media"> </p> <p><strong>Experiment on validation with ["Exploring Venus", "The Face on Mars"] prompts left out for validation</strong></p> <pre><code># MNB and SGD baseline configurations (hyperparameter values not preserved in this post)
</code></pre> <p><strong>LGB</strong></p> <pre><code># LGB with early stopping at roughly 3000 iterations looked better
# surviving values from the parameter listing: 3000, 0.78, 0.80, 6743
# and learning_rate 0.00581909898961407 (the remaining parameter names were not preserved)
</code></pre> <p><strong>CAT</strong> low cv but doesn't overfit like sgd and mnb, which train very fast, <strong>so it was really not individually that good but did balance the overfitting from sgd and mnb</strong> </p> <pre><code># CatBoost: iterations, subsample and learning_rate were tuned (values not preserved in this post)
</code></pre> <h2>2 <strong>LOGODDS helped in ensemble</strong>:</h2> <p>We also observed that our second best private sub was a log-odds ensemble, which we think was helpful in ensembling disparate models. Due to its low public LB, unfortunately we didn't select it.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F768166%2F133fc6d8976b1b4bbf92e329aea6811f%2FScreenshot%202024-01-22%20at%209.21.31PM.png?generation=1705976520756254&amp;alt=media"></p> <h2>3 <strong>distilbert was best among our transformer based models and LLMs didn't help, deberta close second</strong>.</h2> <p>LLMs, especially Mistral, were miserable for us on both public and private, 
but distilbert, thanks to the public notebook and our tuning, scored really well: <code>0.887</code> in the private LB, which we should have added to our ensemble.<br> <a href="https://www.kaggle.com/code/gauravbrills/inference-detectai-distilroberta" target="_blank">https://www.kaggle.com/code/gauravbrills/inference-detectai-distilroberta</a><br> This was trained on feedback data, drcat and several public PaLM, Mistral and Gemini datasets.</p> <p>Mistral was bad :( <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F768166%2F992a71f6ccf070f05c69f6a71d3dd635%2FScreenshot%202024-01-22%20at%209.29.52PM.png?generation=1705977084984276&amp;alt=media"></p> <h2>4 <strong>Stacking with pseudo labelling did work but we didn't persevere</strong></h2> <p>Initially moth tried a technique to rank preds, pseudo label the top and bottom, and do a stacking approach. This gave a considerable bump but we did not persevere with it, which proved to be the <code>secret sauce</code> in many top solutions. This was mainly because CAT was too slow and hence we could not ensemble more models.</p> <h2>5 <strong>Tried paper approaches, a few worked, a few seem interesting to try</strong></h2> <p>We tried a few paper approaches.<br> Things from papers which worked:<br> -- cleaner for spaces from <a href="https://github.com/YuchuanTian/AIGC_text_detector/blob/13ab846c6540c37e1eb3091edb2f2b99682a3303/corpus_cleaning_kit.py" target="_blank">https://github.com/YuchuanTian/AIGC_text_detector/blob/13ab846c6540c37e1eb3091edb2f2b99682a3303/corpus_cleaning_kit.py</a><br> -- <strong>PHD intrinsic dimension</strong> seems really interesting; we could not finish testing it and may give it a try, not sure anyone tried this: <a href="https://arxiv.org/pdf/2306.04723.pdf" target="_blank">https://arxiv.org/pdf/2306.04723.pdf</a> </p>
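<p>The log-odds ensemble mentioned in point 2 above can be sketched like this: convert each model's probabilities to logits, average in logit space, then map back with a sigmoid. A minimal illustration; the probabilities and weights are placeholders.</p> <pre><code>import numpy as np

def logit(p, eps=1e-6):
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    return np.log(p / (1 - p))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def logodds_ensemble(prob_lists, weights):
    logits = np.stack([logit(p) for p in prob_lists])
    return sigmoid(np.average(logits, axis=0, weights=weights))

# hypothetical probabilities from a TF-IDF model and a transformer
print(logodds_ensemble([[0.90, 0.20, 0.60], [0.70, 0.10, 0.80]], weights=[0.6, 0.4]))
</code></pre> <p>Averaging in logit space keeps very confident predictions from being drowned out the way they can be in plain probability averaging, which is one reason it can help when blending disparate models.</p>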
LLM - Detect AI Generated Text
Winning Solution 34th Solution (Possible 22nd Solution) LB Public: 0.954 LB Private: 0.92
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes! At least for this solution, the main focus was balancing the dataset. The three answers I selected each had different approaches, but this basic modification ended up being the best one. I was also surprised at how big a difference focusing on a well-balanced dataset makes. </p> <p><strong>This project was made as an extension to the ML@Berkeley NMEP (New Member Education Program).</strong><br> <strong>Please check out our organization if interested!</strong><br> <a href="https://ml.berkeley.edu" target="_blank">https://ml.berkeley.edu</a><br> <a href="https://www.linkedin.com/company/machine-learning-at-berkeley/mycompany/" target="_blank">https://www.linkedin.com/company/machine-learning-at-berkeley/mycompany/</a></p> <p>Private: 0.92 (Private Best: 0.929) Public: 0.954</p> <p>Here are some approaches that allowed us to achieve 0.92 for the private score, ending up at 34th. A similar approach was used in a different notebook to achieve 0.929 (unfortunately this was not selected for the final submission).</p> <p>Referenced and learned a lot from:<br> <a href="https://www.kaggle.com/code/batprem/llm-daigt-excluded-prompts" target="_blank">TF-IDF Model (from public notebook):</a><br> Source: LLM DAIGT excluded prompts<br> Scores: Private - 0.895, Public - 0.963</p> <ol> <li><strong>Ensemble Random Forest</strong><br> Added a random forest model to the ensemble to increase model diversity (used a light weight, similar to mnb)</li> </ol> <pre><code># estimator names, the 'soft' voting mode and n_jobs=-1 are reconstructed; they were dropped from the original listing
ensemble = VotingClassifier(
    estimators=[('mnb', clf), ('sgd', sgd_model), ('lgb', lgb), ('cat', cat), ('rf', rf_model)],
    weights=weights, voting='soft', n_jobs=-1)
</code></pre> <ol start="2"> <li><strong>Balance the Distribution and Add Dataset Diversity</strong><br> <a href="https://github.com/panagiotisanagnostou/AI-GA#" target="_blank">https://github.com/panagiotisanagnostou/AI-GA#</a><br> Made well-balanced and diverse training data by adding additional train data. I checked the distribution of labels, and added an additional dataset to make the training data well balanced. Specifically, AI-GA (AI-Generated Abstracts dataset) was used. </li> </ol> <pre><code>import matplotlib.pyplot as plt

merged_df['label'].hist(bins=20)   # column name, bin count and axis labels are assumptions; they were dropped from the original listing
plt.xlabel('label')
plt.ylabel('count')
plt.title('Label distribution')
plt.show()
</code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3441149%2Fa3d01f665cddf97021049d3c65a4e91c%2FScreenshot%202024-01-23%20at%203.05.03%20PM.png?generation=1706051944583747&amp;alt=media"></p> <p>Check that the distribution is now balanced after adding data:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3441149%2F755a2da27673bbd2dd11f9a0bcaa3224%2FWell%20Balanced.png?generation=1706052016460544&amp;alt=media"></p> <p>Please feel free to ask any follow up questions.</p>
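<p>The balancing step described in point 2 amounts to appending external generated essays until the two classes are roughly even. A hedged sketch under assumed file paths and column names (a DAIGT V2 csv with a 0/1 <code>label</code> column, and an AI-GA csv whose rows are all <code>label == 1</code>):</p> <pre><code>import pandas as pd

train = pd.read_csv("train_v2_drcat_02.csv")   # DAIGT V2 (file name is an assumption)
extra = pd.read_csv("ai_ga_dataset.csv")       # AI-GA abstracts, all label == 1 (assumption)

counts = train["label"].value_counts()
deficit = int(counts.get(0, 0) - counts.get(1, 0))  # how many generated samples are missing
if deficit > 0:
    add = extra.sample(n=min(deficit, len(extra)), random_state=42)
    train = pd.concat([train, add], ignore_index=True)

print(train["label"].value_counts(normalize=True))  # should now be close to 50/50
</code></pre>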
LLM - Detect AI Generated Text
[121st Solution] You Can Achieve PB 0.929 Only Use TF-IDF
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, that's slow. I set <code>n_estimators</code> to 1500 and it would take about 3-4 hours on hidden test data (I guess).</p> <p><code>min_df = 2</code> means that the phrase must appear in all documents at least twice. Firstly, this helps to exclude some spelling mistakes. Secondly, it can reduce the dimensions of the sparse matrix to some extent, allowing me to use more training data. <br> I tried setting <code>min_df</code> to <code>3</code>, but the score dropped. I think for simple models, inputting the most useful features yields better results. For more complex models, such as LGBM and MLP, which have inherent feature selection capabilities, inputting all features tends to produce better results.</p> <h3>1. Start</h3> <p>Thanks to <code>Kaggle</code> and <code>THE LEARNING AGENCY LAB</code> for hosting this meaningful competition. <br> In addition, I would like to thank all the <code>Kagglers</code> who shared datasets and innovative ideas. <br> Although it's another drop on the private leaderboard, fortunately, I managed to hold on to the silver medal.</p> <h3>2. Finding</h3> <ul> <li><code>n_grams = (3, 5)</code> worked best for me, I did not try <code>n_grams</code> larger than <code>5</code>.</li> <li><code>min_df = 2</code> can boost scores of <code>SGD</code> and <code>MultinomialNB</code> almost <code>0.02</code>, but would reduce scores of <code>CatBoost</code> and <code>LGBM</code> almost <code>0.01</code>.</li> <li>When I used <code>min_df = 2</code>, I tried up to <code>57k</code> data without encountering an out-of-memory error. However, when I didn't use <code>min_df = 2</code>, I could only train a maximum of <code>45k</code>.</li> <li>For <code>SGD</code> and <code>MultinomialNB</code>, I created a new dataset combined <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">DAIGT V2 Train Dataset</a>, <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v4-train-dataset?select=daigt_magic_generations.csv" target="_blank">DAIGT V4 Magic Generations</a>, <a href="https://www.kaggle.com/datasets/asalhi/gemini-pro-llm-daigt" target="_blank">Gemini Pro LLM - DAIGT</a>, I could achieve LB score <code>0.960</code> with only these two models.</li> <li>For <code>CatBoost</code> and <code>LGBM</code>, I still used original <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">DAIGT V2 Train Dataset</a>, which could give great results on LB.</li> <li>I tried <code>RandomForest</code> on <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">DAIGT V2 Train Dataset</a>, which can achieve LB score <code>0.930</code>. Also, I tried <code>MLP</code> on the same dataset, got LB score <code>0.939</code>.</li> <li>Reduce <code>CatBoost</code> iterations and increase learning rate can achieve better score and decrease a lot of execution time.</li> </ul> <h3>3. Final Code</h3> <p>I divided all the models into two major categories to generate prediction results since these two categories of models used different datasets and parameters. 
</p> <table> <thead> <tr> <th>Combo 1</th> <th>Weights 1</th> <th>Combo 2</th> <th>Weights 2</th> <th>Final Weights</th> <th>LB</th> <th>PB</th> <th>Chosen</th> </tr> </thead> <tbody> <tr> <td><code>(MultinomialNB, SGD)</code></td> <td><code>[0.5, 0.5]</code></td> <td><code>(LGBM, RandomForest)</code></td> <td><code>[0.5, 0.5]</code></td> <td><code>[0.4, 0.6]</code></td> <td><code>0.970</code></td> <td><code>0.907</code></td> <td><code>Yes</code></td> </tr> <tr> <td><code>(MultinomialNB, SGD)</code></td> <td><code>[0.10, 0.31]</code></td> <td><code>(LGBM, CatBoost)</code></td> <td><code>[0.28, 0.67]</code></td> <td><code>[0.3, 0.7]</code></td> <td><code>0.966</code></td> <td><code>0.908</code></td> <td><code>Yes</code></td> </tr> <tr> <td><code>(MultinomialNB, SGD)</code></td> <td><code>[0.5, 0.5]</code></td> <td><code>(CatBoost, RandomForest)</code></td> <td><code>[20.0, 8.0]</code></td> <td><code>[0.20, 0.80]</code></td> <td><code>0.969</code></td> <td><code>0.929</code></td> <td><code>After Deadline</code></td> </tr> <tr> <td><code>(MultinomialNB, SGD)</code></td> <td><code>[0.5, 0.5]</code></td> <td><code>(CatBoost, RandomForest, MLP)</code></td> <td><code>[4.0, 1.5, 0.3]</code></td> <td><code>[0.20, 0.80]</code></td> <td><code>0.970</code></td> <td><code>0.928</code></td> <td><code>After Deadline</code></td> </tr> </tbody> </table> <p>Notebook Links:</p> <p><a href="https://www.kaggle.com/code/lizhecheng/lb-0-970-pb-0-928-mnb-sgd-cb-rf-mlp/notebook" target="_blank">LB 0.970 PB 0.928 MNB+SGD+CB+RF+MLP</a></p> <p><a href="https://www.kaggle.com/code/lizhecheng/lb-0-969-pb-0-929-mnb-sgd-rf-cb/notebook" target="_blank">LB 0.969 PB 0.929 MNB+SGD+RF+CB</a></p> <p>As a result, although <code>CatBoost</code>'s score on the LB is relatively low compared to other models, it proves its strong robustness. Therefore, we can see that giving <code>CatBoost</code> a higher weight can lead to better performance on the PB.</p> <h3>4. Did Not Work</h3> <ul> <li><p>Setting <code>max_df</code> or <code>max_features</code> did not work for me.</p></li> <li><p>I tried to generate a new dataset with <code>gpt-3.5-turbo</code>, but could not get a good result on my dataset.</p></li> </ul> <pre><code># the two prompt strings were not preserved in this post
model_input = "..." + "..." + human_text
</code></pre> <ul> <li>Tried <code>SelectKBest</code> and <code>chi2</code> to reduce the dimension of the vectorized sparse matrix; the LB score dropped.</li> </ul> <pre><code># variable names on the left and the divisor are reconstructed placeholders
k = int(num_features / reduction_factor)
chi2_selector = SelectKBest(chi2, k=k)
X_train_chi2 = chi2_selector.fit_transform(X_train, y_train)
X_test_chi2 = chi2_selector.transform(X_test)
</code></pre> <ul> <li>Tried <code>TruncatedSVD</code> too. However, since the dimension of the original sparse matrix is too large, I could only set the new dimension to a very low number, which caused the LB score to drop a lot. (Setting a large output dimension for the reduction can still lead to an out-of-memory error, because <code>TruncatedSVD</code> is achieved through matrix multiplication, which means that the generated new matrix also occupies memory space.)</li> </ul> <pre><code># variable names on the left and the divisor are reconstructed placeholders
n_components = int(num_features / reduction_factor)
svd = TruncatedSVD(n_components=n_components)
X_train_svd = svd.fit_transform(X_train)
X_test_svd = svd.transform(X_test)
</code></pre> <ul> <li>Tried to use features from the <a href="https://www.kaggle.com/competitions/linking-writing-processes-to-writing-quality" target="_blank">last competition</a>, such as the ratio of words with length greater than 5, 6, …, 10; the ratio of sentences with length greater than 25, 50, 75; and different aggregations of word features, sentence features and paragraph features.</li> </ul> <h3>5. 
Conclusion</h3> <p>The robustness of large language models is indeed stronger than tree models. <br> Additionally, in this competition, there is a higher requirement for the quality of training data for large language models. I used the publicly available large datasets from the discussions, but I did not achieve very ideal results. Therefore, it is essential to have the machine rewrite human-written articles to increase the model's discrimination difficulty.<br> I gained a lot from this competition and look forward to applying what I've learned in the next one. Team Avengers will keep moving forward.</p> <h3>6. Repo</h3> <p>GitHub: <a href="https://github.com/Lizhecheng02/Kaggle-LLM-Detect_AI_Generated_Text" target="_blank">Here</a></p>
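<p>The two-stage blending summarized in the table earlier in this write-up can be reproduced with a few lines of weighted averaging. A minimal sketch; the per-model probability arrays are placeholders, and the weights are taken from the first row of the table.</p> <pre><code>import numpy as np

def blend(preds, weights):
    return np.average(np.asarray(preds, dtype=float), axis=0,
                      weights=np.asarray(weights, dtype=float))

rng = np.random.default_rng(0)
p_mnb, p_sgd, p_lgbm, p_rf = (rng.random(5) for _ in range(4))  # hypothetical test probabilities

combo1 = blend([p_mnb, p_sgd], [0.5, 0.5])     # (MultinomialNB, SGD)
combo2 = blend([p_lgbm, p_rf], [0.5, 0.5])     # (LGBM, RandomForest)
final = blend([combo1, combo2], [0.4, 0.6])    # final weights
print(final)
</code></pre>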
LLM - Detect AI Generated Text
[138 -> 1620th place], sharing my findings...
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly, huge thanks to the organizers for hosting this competition and everyone who shared datasets, insights through the discussions! It was a very enriching experience!</p> <p><strong>Approach</strong></p> <p>My solution is an ensemble of the <a href="https://www.kaggle.com/code/erikfolkesson/detailed-explanation-custom-tokenizer" target="_blank">custom trained tokenizer tf-idf</a> and an LGBM built on <a href="https://www.kaggle.com/code/alejopaullier/daigt-deberta-text-classification-train" target="_blank">deberta-v3-base</a> predictions along with hand crafted features. The features included =&gt;</p> <ul> <li>text stat based features related to readability</li> <li>average, std of sentence length</li> <li>number of sentences</li> <li>number of unique words</li> <li>number of paragraphs</li> <li>average paragraph lengths</li> <li>number of punctuations</li> <li>parts of speech such as nouns, verb, adverbs etc. counts</li> <li>total number of words</li> <li>number of unique 3 grams</li> <li>number of unique 4 grams</li> </ul> <p><strong>Dataset</strong> = <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/455517" target="_blank">Daigt-v2 dataset</a></p> <p><strong>What worked</strong></p> <ul> <li>Training tokenizer on test set only</li> <li>Using <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/468767" target="_blank">Levenstein</a>, spell correction</li> <li>Large vocabulary (~120k)</li> <li>Training tree based models using <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/467392" target="_blank">minimize logloss metric instead of maximizing roc</a></li> <li><a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/468363" target="_blank">using post process</a></li> </ul> <p><strong>What didn't work</strong></p> <ul> <li>using deberta mean pooled features as input to lgbm</li> <li>Training tokenizer on train+test data</li> <li>Using lightautoml models (usually timed out in submissions)</li> <li>Giving lower weight to tf-idf in ensemble</li> </ul> <p>It's a bit disheartening that my submissions didn't perform as expected and the submissions which weren't selected did much better. Nevertheless, I had a great learning experience and I hope to do better next time😃!</p>
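<p>A few of the hand-crafted features listed above can be computed with <code>textstat</code> plus simple counting. This is a hedged sketch of the general idea, not the author's exact feature code; the sentence splitting in particular is deliberately naive.</p> <pre><code>import numpy as np
import textstat

def essay_features(text: str) -> dict:
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    sent_lens = [len(s.split()) for s in sentences]
    return {
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "n_sentences": len(sentences),
        "avg_sentence_len": float(np.mean(sent_lens)) if sent_lens else 0.0,
        "std_sentence_len": float(np.std(sent_lens)) if sent_lens else 0.0,
        "n_words": len(words),
        "n_unique_words": len({w.lower() for w in words}),
        "n_unique_3grams": len({tuple(words[i:i + 3]) for i in range(len(words) - 2)}),
        "n_paragraphs": text.count("\n\n") + 1,
    }

print(essay_features("Cars are useful. However, they pollute the air.\n\nWe should drive less."))
</code></pre>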
LLM - Detect AI Generated Text
[1st Place Efficiency Prize] Scientific Journal
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Submission was selected based on gut feeling, despite its low public score. I believe the 'kids-frontier' dataset is useful here, along with other considerations I had when participating in the CommonLit and Feedback-related competitions.</p> <p>My efficiency solution is based on two wonderful notebooks created here</p> <ul> <li><a href="https://www.kaggle.com/code/datafan07/train-your-own-tokenizer" target="_blank">https://www.kaggle.com/code/datafan07/train-your-own-tokenizer</a> by <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a>. </li> <li><a href="https://www.kaggle.com/code/siddhvr/llm-daigt-sub" target="_blank">https://www.kaggle.com/code/siddhvr/llm-daigt-sub</a> by <a href="https://www.kaggle.com/siddhvr" target="_blank">@siddhvr</a> </li> </ul> <p><strong>Quick rundown</strong></p> <ul> <li>A custom Byte-pair encoding tokenizer on the “public + private” test dataset.</li> <li>Train a TFIDFVectorizer on the tokenized test set</li> <li>Train three classifier models “MultinomialNB, SGDClassifier, and LGBMClassifier” on top of the TFIDF vectors and</li> <li>Perform an ensemble using a VotingClassifier of the above-mentioned 3 classifiers.</li> <li>External data: kids-frontier + daigt-v2-train-dataset by <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a></li> </ul> <p>My unique solution for this competition is to curate a good dataset for this competition, I have tried very advanced NLP/LLM models, techniques/tricks that u can think of, but none works on public LB. My hypothesis is that this competition needs a dataset of similar distribution e.g. the one shared by the community and especially daigt-v2-train-dataset. My final CPU submission was just a simple baseline based on <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> <a href="https://www.kaggle.com/code/datafan07/train-your-own-tokenizer">work</a>, and just incorporate my carefully curated data. Adding other data causes severe data drift which further increases my gap between CV/LB.</p> <p>I felt that having a strong educational dataset of a similar distribution is crucial, hence I invest most of my time to find a really good educational dataset especially written by students of various grades with a focus from grade 1 - grade 13. A collective effort has been started from this competition <a href="https://www.kaggle.com/competitions/commonlitreadabilityprize">CommonLit Readability Prize competition</a>. In that competition, as part of the training data, the host uses some excerpts from “kids.frontier.org”, hence I started exploring what kind of dataset these are, to my surprise this is a very good essay dataset that has yet to show its true potential. 
I did not share these datasets prior to this competition, as I believe this is a very strong dataset for future NLP competitions from The Learning Agency Lab.</p> <p><strong>External data:</strong></p> <ol> <li><a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset/data">daigt-v2-train-dataset</a> by <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> </li> <li><a href="https://www.kaggle.com/datasets/xyzdivergence/kf-dataset">Kids Frontier scientific journal</a> </li> <li><a href="https://www.kaggle.com/datasets/xyzdivergence/kf-data-source">Kids Frontier source</a></li> </ol> <p><strong>What is <a href="https://kids.frontiersin.org/articles/">Kids Frontier</a>?</strong><br> "Frontiers for Young Minds" is an open-access scientific journal platform uniquely designed for children. It features articles written by scientists that are reviewed by a broad range of young people before publication. </p> <p>Distinguished scientists are invited to write about their discoveries in a form accessible to young readers, and it is then up to the kids themselves – with the help of a science mentor – to provide feedback and explain to the authors how to best improve the articles before publication. </p> <p>It covers scientific articles from the following domains: 1. Astronomy and Physics, 2. Biodiversity, 3. Chemistry and Materials, 4. Earth Sciences, 5. Engineering and Technology, 6. Human Health, 7. Mathematics and Economics, 8. Neuroscience and Psychology.</p> <p>I have collected the raw texts of around 1k articles/journals from kids.frontiersin.org. I have used the final generated articles/essays/journals from the following <strong>prompt2</strong> only. The original article/journal is only used as a reference and carries a Creative Commons Attribution License (CC BY). </p> <ul> <li><strong>gpt-3.5-turbo-1106 API</strong> is used for generating the final text, using the following two prompts.</li> <li><strong>prompt1</strong>: "Given the following text what are some questions to ask? Generate at least 40 different questions. \nText: {text}"</li> <li><strong>prompt2</strong>: "Given the following article, Summarize the question '{question}' as a {grade_level} grade student in less than 500 words. The summarization can be either complete, incomplete or partially complete. 
\nArticle: {article}" (generated 60k samples, only 30k used for training)</li> </ul> <p>A sample of a journal article can be found at: <a href="https://kids.frontiersin.org/articles/10.3389/frym.2023.1215124" target="_blank">https://kids.frontiersin.org/articles/10.3389/frym.2023.1215124</a></p> <p><b>Public LB</b>: 0.947061, <b>Private LB</b>: 0.91967, <b>CPU Inference time</b>: 17mins</p> <p><strong>References</strong></p> <ul> <li><a href="https://www.kaggle.com/code/datafan07/train-your-own-tokenizer" target="_blank">https://www.kaggle.com/code/datafan07/train-your-own-tokenizer</a> </li> <li><a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset</a></li> <li><a href="https://www.kaggle.com/code/siddhvr/llm-daigt-sub" target="_blank">https://www.kaggle.com/code/siddhvr/llm-daigt-sub</a></li> <li><a href="https://kids.frontiersin.org/" target="_blank">https://kids.frontiersin.org/</a></li> <li><a href="https://www.frontiersin.org/news/2017/07/05/frontiers-for-young-minds-using-frontiers-for-young-minds-articles-in-your-classroom/" target="_blank">https://www.frontiersin.org/news/2017/07/05/frontiers-for-young-minds-using-frontiers-for-young-minds-articles-in-your-classroom/</a></li> </ul> <p><strong>Code</strong>: <a href="https://www.kaggle.com/code/xyzdivergence/llm-daigt-sub/notebook?scriptVersionId=153307051" target="_blank">https://www.kaggle.com/code/xyzdivergence/llm-daigt-sub/notebook?scriptVersionId=153307051</a></p>
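<p>The two-prompt generation loop described above looks roughly like the following. This is a hedged sketch: the OpenAI client usage reflects the current <code>openai</code> Python package rather than whatever version the author used, and the question selection and grade level are simplified placeholders.</p> <pre><code>from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def chat(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

article = "..."  # raw text of one Kids Frontier article
prompt1 = ("Given the following text what are some questions to ask? "
           "Generate at least 40 different questions. \nText: " + article)
questions = [q for q in chat(prompt1).splitlines() if q.strip()]

question, grade_level = questions[0], "8th"
prompt2 = (f"Given the following article, Summarize the question '{question}' as a {grade_level} grade "
           "student in less than 500 words. The summarization can be either complete, incomplete or "
           "partially complete. \nArticle: " + article)
generated_essay = chat(prompt2)  # one synthetic essay, labelled as generated
</code></pre>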
LLM - Detect AI Generated Text
[1st Public/9th Private] LLMLab - Solution Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the Kaggle and the Learning Agency Lab for hosting this challenge. We really enjoyed this and learned a lot over the past 3 months. Thanks also to all who contributed with ideas and datasets. </p> <p>Our solution is composed of two main parts, a TFIDF based pipeline using several classifiers sitting on top of the TFIDF features, and a BERT based pipeline based on classifying human and generated texts. </p> <p>We believe the novel components to our solution include:</p> <ol> <li>Carefully curated datasets with diversity in prompts and models varying in size from ~200K samples to ~700K samples.</li> <li>Statistical based and reverse engineered deobfuscation pipeline. (+0.006 public/-0.001 private)</li> <li>Cluster based post processing (~0.001)</li> </ol> <p>The TFIDF pipeline is similar to the ones used in many public notebooks with the notable exception that we used a reverse engineered based approach to correct systematic spelling mistakes. A separate notebook will be published on how we handled deobfuscation. Our TFIDF also had a post processing element where we noted that pairs of texts of a high enough similarity are almost always both LLM generated.</p> <p>Our BERT based pipeline consists of 2 deberta_v3_large and 1 roberta model. Each was trained on a different but overlapping variant of highly diverse datasets. The datasets were generated from a large number of open source LLM’s of varying sizes between 7-70B parameters, as well as commercial LLM offerings, and a wide variation of temperature and top_p. Generated texts were derived from essay generation, paraphrasing and text completion prompts. Human texts were curated from student essays and open source web text. 250k-700k total samples were in each dataset version. The best single BERT based classifier achieved 0.96 public / 0.915 private leaderboard. </p> <p>The 0.99 was unexpected; we blended tfidf models with point 2&amp;3 above, along with deberta with point 1 above… and magic happened. We had not seen blending working so well before. Unfortunately the 0.99 was not on private also 😀 but the fall was not too bad when looking over the leaderboard. <br> Note our best selected private sub, used point 1 data (not the public script data), but scored worse on public. Just as well we did that. </p> <p>Thank you <a href="https://www.kaggle.com/olegshpagin" target="_blank">@olegshpagin</a> … currently, I dont think we will unfortunately - there is too much work going on in there. Especially LLM data generation - there are a lot of scripts to tie together. <br> But I think a lot of the elements in our solution will be seen in some of the top 7, which will be shared. </p> <p>With batchsize 16, and accum 2 <code>microsoft/deberta-v3-large</code> was about 16GB memory. When we say BERT, it means bert based models like deberta, roberta etc. These are generally better performing at such tasks than xlnet. 
The dataset was a mix of public sources, shared kaggle datasets, the persuade corpus and our own LLM generations.</p> <p>Good question <a href="https://www.kaggle.com/riteshbhalerao" target="_blank">@riteshbhalerao</a> I think the reasons are reducing variance, and also that the models can look at the problem from different angles. <br> With DL, the models are stochastic, so in general it is good to average different seeds of a training run if runtime is not a major factor. <br> Then different models can work well because they approach the problem differently - roberta was trained with MLM and has absolute positional embeddings, whereas deberta has relative positional embeddings and was trained with RTD. There are a few other differences in the models, which can lead to each model being better at different insights. <br> Changing datasets is again the same - a model trained on lots of non-essay data may generalise better, while a model more focused on essays may be better for essay-specific problems but not generalise as well. <br> But there you could argue that with enough time and compute we could just train one very large model that learns all the data very well. We plan to test mistral-7b, like first place in the competition did, to see how it performs on our dataset. </p> <p>Our weights for the transformer models were:</p> <ul> <li>20% … deberta-v3-large trained on a dataset of 160K/110K LLM/Human which we built up over time testing on CV and LB. </li> <li>10% … roberta-large trained on the same 160K/110K LLM/Human. </li> <li>10% … deberta-v3-large trained on that plus a much larger dataset (~500K samples) built off slimpajama using continuation prompting and a number of different sized LLMs</li> </ul> <p>The second and third models scored lower on LB, but blended well as they brought good diversity. The rest of the weighting was on the TFIDF based models. </p>
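<p>The post-processing observation above, that pairs of texts with a high enough similarity are almost always both LLM generated, can be turned into a simple rule: find near-duplicate pairs via cosine similarity over TF-IDF vectors and push their scores up. A hedged sketch; the threshold and the 0.99 floor are arbitrary placeholder values, not the team's settings.</p> <pre><code>import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def boost_near_duplicates(texts, preds, threshold=0.90):
    X = TfidfVectorizer(ngram_range=(1, 2), min_df=1).fit_transform(texts)
    sim = cosine_similarity(X)
    np.fill_diagonal(sim, 0.0)
    preds = np.asarray(preds, dtype=float).copy()
    rows, cols = np.where(sim > threshold)          # indices of highly similar pairs
    hit = np.unique(np.concatenate([rows, cols]))
    preds[hit] = np.maximum(preds[hit], 0.99)       # nudge both members toward "generated"
    return preds
</code></pre> <p>For a test set of roughly 9k essays the dense pairwise matrix is a few hundred MB; on larger sets, chunked similarity or a nearest-neighbour index would be the safer choice.</p>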
LLM - Detect AI Generated Text
[21th Solution] Secret Sauce [0.986 Public - Selected Private: 0.932 Best:0.957]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <br> Well, I used pseudo-labeling twice before in other competitions (outside Kaggle) and was the "checkmate" move back then. So I tried it here.<br> And I know the secret in the test so I focused on the test :-)</p> <p>The prompt prediction was a minor issue, I didn't care much about accuracy as much as I cared to know what N prompts are in control of the test data. (As insurance plan) , I didn't want surprises in private but I was surprised in the end by wrong submission selections :/ </p> <p>Wow, I must skipped this comment! <br> This was done after ensembling the linear model with distilroberta, when I saw the jump from 0.975 to 0.982 I thought I was dreaming 😅</p> <p>Thank you <strong>ALL</strong> for such a great time and competition :-) </p> <p><strong>LB Solution: - The Secret Sauce</strong></p> <p><strong>Phase 1: Selecting Datasets.</strong></p> <p>I used the following datasets : </p> <ul> <li>DAIGT V2 Train Dataset by <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a></li> <li>LLM: Mistral-7B Instruct texts by <a href="https://www.kaggle.com/carlmcbrideellis" target="_blank">@carlmcbrideellis</a></li> <li>LLM Extra dataset (Private - generated by me), which I generated from Mistral-7B using the same way that <a href="https://www.kaggle.com/carlmcbrideellis" target="_blank">@carlmcbrideellis</a> did.</li> <li>Gemini Pro LLM - DAIGT by me (Public)</li> <li>Competition dataset.</li> </ul> <p><strong>Phase 2: Dataset preparation.</strong><br> All 15 prompts were kept (at this phase).<br> The datasets were merged into 3 datasets (Train dataset - all data), (LLM-data only the Mistral data with the Gemini data - all data here with label 1), the original dataset (which 99% human with label 0), named it train_old (Don’t know why !) </p> <p><strong>Phase 3: Training a LogisticRegression for prompt_name classification.</strong><br> A simple and quick phase but needed as insurance, why? <br> I trained the model to predict each test text prompt_name.<br> I counted the unique number of prompt_ids ( I know it's 5 but wanted to make it a more general solution).<br> I selected Top N repeated prmpt_names (where N is the unique number of prompt_ids ) from the 9000+ test set.<br> In this way, I have for sure the prompt_names in the test set (This can be applied on different datasets also) </p> <p><strong>Phase4: Selecting the Training DataSets</strong><br> Now after finding out the prompts used, I selected their crosspondig texts from the Train dataset only. 
<br> I didn’t do this to the LLM-Data set or the old_train (I wanted to keep more general data for less overfitting) </p> <p><strong>Phase5: Spelling Phase</strong><br> Added later based on Levenshtein distance for error correction - based on <a href="https://www.kaggle.com/piotrkoz" target="_blank">@piotrkoz</a> discussion thread and <a href="https://www.kaggle.com/wickkey2" target="_blank">@wickkey2</a> posted method ( which saved me some time writing a similar one) </p> <p>The error correction was done on All train datasets and test dataset.</p> <p><strong>Phase6: The magic of Byte Pair Encoding thanks to <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> golden approach.</strong><br> I used vocab size of only 5000 which was enough. <br> I created 3 tokenized datasets, Train , test and test+llm_data which is used for fitting.</p> <p><strong>Phase 7: Tf-idf</strong> with 3 to 6/7 ngrams range and min_df = 2 (which showed best results) <br> <strong>Phase 8: MaxAbsScaler + Ridge/LinearSVR is all you need.</strong></p> <p>Until here the public score was around 0.976/7 (with spell checking) and 0.972/3 without and as far as I can tell between 0.894 and 0.905 private</p> <p>I knew that this was not enough and most likely I would be taking a hit by private LB so I went to digg more. </p> <p><strong>Phase 9: I trained different transformers models</strong> with stemmed words, POS, Lemma, and mixup sentences … I tried a lot but nothing worked well until <a href="https://www.kaggle.com/mustafakeser4" target="_blank">@mustafakeser4</a> shared the results over distilroberta.<br> So I retried with distilroberta I was able to reach 0.915. But in the end, I used the 0.913 version of his work (the 0.927 didn’t show higher results on LB and I selected the weaker model) I didn’t choose mine because it should a slightly lower ensembling results. </p> <p><strong>Phase 10 Ensembling:</strong> I ensemble distilroberta results with my Ridge model results and pushed the score from 0.972 to 0.975 on the public leader board.</p> <p><strong>Phase 11 : Secret Sauce and the reason behind my 0.932 and 0.957 private scores.</strong></p> <p>Since the test data is somehow manipulated and it's hard to come up with one similar enough. I thought why not train with test! <br> I ranked the result for my top ensemble notebook (0.975 back then) and selected X top rows as AI generated (high scores) and bottom Y as human (low scores) (first exp X and Y was 1000) and added them to training phase and rescored, this pushed the score from 0.975 to 0.982.<br> I repeated the process 4 times (last exp I repeated it 5 times) each time I increased the X and Y selection by 200/250 rows and I was able to score 0.983 and private 0.922</p> <p><strong>Phase 12: More Sauce:</strong><br> -This time I selected the median 50 rows from test (rows with scores around the median)</p> <ul> <li>I ran similarity with train data to find top X (X was set to 10) rows from the train data to those test rows.</li> <li>I retrained the Ridge model and updates the 50 rows scores.</li> <li>This puhsed public LB to 0.984 and private to 0.929</li> </ul> <p><strong>Phase 13: I thought that this is not enough and that I need more transformers:</strong><br> The data selected from Phase 10 (The Top X and Bottom Y) was used as validation data to distilroberta ( I needed something fast - I am training while submitting !) <br> I ran it for a couple of epochs (the best was 4.0) and the best scoring model was sselected. 
<br> I generated new scores, which I ensembled with the final results of Phase 12 and Phase 11; this gave an LB score of 0.982 and a private score of 0.957 (without Phase 11) and 0.953 with Phase 11.</p> <p>That summarizes my LB solution. </p> <p>Best Private Score Notebook (needs cleaning): <a href="https://www.kaggle.com/asalhi/modelsxrunv1deeplearning" target="_blank">https://www.kaggle.com/asalhi/modelsxrunv1deeplearning</a><br> Chosen Notebook: <a href="https://www.kaggle.com/code/asalhi/secretsauceartridge" target="_blank">https://www.kaggle.com/code/asalhi/secretsauceartridge</a></p> <p>I didn't choose my best work :/ this is very frustrating but that's a mistake I have to accept and move on from.</p> <p>I am not sure about the efficiency prize score yet! I am not getting my hopes up :/</p> <p>I used a sigmoid to put all predictions on the same ground: </p> <pre><code>import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))
</code></pre> <p>Yes, in our case, since we will be scoring on the same test data, it is fine. Even on different datasets the process is quite fast: my best notebook, which scores 0.957 private and 0.984 public, takes about 1.5 hours to predict, relearn and re-evaluate, so it should also be fine as a pipeline (to run pseudo labeling every few days in production on an old or already seen test set). </p>
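<p>Phase 11's test-set pseudo-labelling can be sketched as a small loop: score the test set, take the most confident rows at each end as extra training data, retrain, and repeat with a growing selection. A minimal illustration under simplified assumptions; the generic <code>fit_predict</code> callback stands in for the whole tokenize + TF-IDF + Ridge pipeline, and re-selection of the same rows across rounds is ignored.</p> <pre><code>import numpy as np

def pseudo_label_rounds(train_texts, train_labels, test_texts, fit_predict,
                        n_rounds=4, x=1000, y=1000, step=250):
    """fit_predict(texts, labels, test_texts) returns an array of test probabilities."""
    texts, labels = list(train_texts), list(train_labels)
    probs = None
    for _ in range(n_rounds):
        probs = np.asarray(fit_predict(texts, labels, test_texts))
        order = np.argsort(probs)
        top_ai, bottom_human = order[-x:], order[:y]   # most confident "generated" / "human" rows
        texts += [test_texts[i] for i in top_ai] + [test_texts[i] for i in bottom_human]
        labels += [1] * len(top_ai) + [0] * len(bottom_human)
        x, y = x + step, y + step                      # grow the selection each round
    return probs
</code></pre>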
LLM - Detect AI Generated Text
[3rd Place Efficiency Prize] Simple Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I want to thank Kaggle and THE LEARNING AGENCY LAB for holding such an interesting competition. I also want to thank <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> for providing such a wonderful <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">training dataset</a>.</p> <p>I joined this competition in the late December. At that time the leaderboard score is already very high. I made a decision to go for the efficiency prize instead. Later on I found teamates in the discussion.</p> <p>The baseline model we chose is <a href="https://www.kaggle.com/code/hubert101/0-960-phrases-are-keys/notebook?scriptVersionId=153589869" target="_blank">the well-known 0.960 notebook</a> which improves the performance of tfidf based on the idea from <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a>. The runtime of the baseline model is more than 2 hours. There are a lot of spaces to opitmize. The optimization consists of three phases.</p> <h1><strong>Runtime optimization</strong></h1> <ol> <li>Remove super weak estimator with long excution time. After the randomization of the weights of the ensemble models, it is clear to find out the contribution of lgb is so little. Remove lgb caused roughly 0.001 performance degradation in the public LB but save 1 hour runtime.</li> <li>Reduce the number of features. The 3,5 ngram sparse matrix contains roughly 4million features while 3,4 contains roughly half of them. When changing 3,5 to 3,4, the performance decreased roughly 0.003 but save half the runtime.</li> <li>Reduce the training iterations of catboost. Other linear regression models are almost instantly train and inference, reduce the training iterations from 1000 to 100 then to 10 can save most of the time while the performance stays almost the same.</li> <li>Remove another weak estimator MNB. Remove MNB could get 0.005 improvement in the local CV and 0.003 in the public leaderboard.</li> <li>Use bayes optimization to search for a new parameters for catboost for 10 iters.</li> </ol> <p><code>sgd = SGDClassifier(max_iter=8000, tol=1e-4, loss="modified_huber",random_state=2000)</code><br> <code>cat = CatBoostClassifier(iterations=10,</code><br> <code>verbose=0,</code><br> <code>depth=4,</code><br> <code>l2_leaf_reg=7.34674002393291,</code><br> <code>learning_rate=0.5996646210492591,</code><br> <code>allow_const_label=True,loss_function = 'CrossEntropy')</code></p> <p>With all optimization mentioned above, the runtime can be reduced to 8 mins and the AUC is above 0.956 in the public leaderboard.</p> <h1><strong>Another ensembling</strong></h1> <p>Thanks for the sharing from <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a> with a brilliant idea to do the <a href="https://www.kaggle.com/code/aerdem4/daigt-superfast-unsupervised-baseline" target="_blank">unsupervised clustering</a>, the public score can be improved roughly 0.0015 while the runtime only increases 60 seconds. However this will cause a performance degradation in the private score. 
In my opinion, the performance degradation is somehow caused by the imbalance of the hidden test data.</p> <h1><strong>Clean code</strong></h1> <p>As mentioned above, the runtime is very low. Cleaning the code and removing unnecessary print statements saved roughly another 20 seconds of runtime.</p> <h1><strong>Code</strong></h1> <p><a href="https://www.kaggle.com/code/chg0901/v69-957-0-955-phrases-are-keys" target="_blank">https://www.kaggle.com/code/chg0901/v69-957-0-955-phrases-are-keys</a></p> <p>The intuition is indeed to minimize the features while maximizing the performance. An n-gram range of (1,3) was not good enough when we tested it. ‘Writing style’ should be context dependent. Maybe unigrams and bi-grams would work if the topics of these essays were more specific.</p>
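<p>Putting the optimizations above together gives a pipeline along these lines: a (3,4) n-gram TF-IDF, an SGD classifier, and a 10-iteration CatBoost in a soft-voting ensemble. This is a hedged sketch that skips the custom byte-pair tokenizer step; <code>train_texts</code>, <code>train_labels</code> and <code>test_texts</code> are assumed to exist, and the CatBoost values are the ones quoted in the write-up.</p> <pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import VotingClassifier
from catboost import CatBoostClassifier

vectorizer = TfidfVectorizer(ngram_range=(3, 4))     # (3,4) roughly halves the features vs (3,5)
X_train = vectorizer.fit_transform(train_texts)
X_test = vectorizer.transform(test_texts)

sgd = SGDClassifier(max_iter=8000, tol=1e-4, loss="modified_huber", random_state=2000)
cat = CatBoostClassifier(iterations=10, depth=4, verbose=0,
                         l2_leaf_reg=7.34674002393291,
                         learning_rate=0.5996646210492591,
                         allow_const_label=True, loss_function="CrossEntropy")

ensemble = VotingClassifier([("sgd", sgd), ("cat", cat)], voting="soft", n_jobs=-1)
ensemble.fit(X_train, train_labels)
probs = ensemble.predict_proba(X_test)[:, 1]
</code></pre>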
LLM - Detect AI Generated Text
[4th Place Solution] A Summary of Combined Arms Approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly, I would like to thank kaggle, the hosts, and everyone who participated in this enjoyable competition. I learned a great deal along the way, and it was quite refreshing during this emerging era of large language models. </p> <p>I would also like to extend my gratitude to my teammate, <a href="https://www.kaggle.com/chasembowers" target="_blank">@chasembowers</a>. Although we merged in the last day of team merging, the integration process was quite smooth and productive. We were able to quickly sync our approaches, utilizing the strong points of both pipelines together. </p> <p>I am also sorry for those who worked hard during the competition but were affected by the final shakeup. I wish you better luck next time!</p> <p><strong>Overview</strong></p> <p>Since it was really hard to implement a reliable CV scheme —although we had separate CV's for each piece— we wanted to have a generalized model as possible. Developing a generalized model in uncharted waters presented its own challenges, as we were uncertain which approaches would be successful. With our custom approach, we could have a resilient final model where each piece is strong in some aspect of the data and could help each other where another one fails. You can refer to this as a simple ensemble, somewhat of a mixture of experts, or a combined arms approach if you prefer.</p> <p>So I'd like to explain each important piece individually:</p> <p><strong>Classical ML Approach:</strong></p> <p>This is a section where many of you are already familiar, as it is similar to my public notebook with only minor parameter adjustments, such as:</p> <ul> <li>Wider ngram range (3,7)</li> <li>Limited feature space, such as max_df and max_features…</li> <li>Extra tokenizer preprocessor steps like normalizers etc.</li> <li>The use of the DAIGT V2 training dataset, for which I am grateful to <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a>, with some post-processing.<ul> <li>Instead of fixing/removing typos, we embraced them. But they were causing some degree of overfitting between human and LLM-generated essays, so we decided to replicate artificial typos. </li> <li>At first, we believed these obfuscations followed certain patterns, such as keyboard typos where a typo character is typically near its neighbors. However, after further analysis, we discovered that character-level obfuscations were completely random. So we chose that approach as well and inserted/replaced artificial typos randomly to 30% of the generated data.</li></ul></li> <li>This piece itself was an sub-ensemble of different linear models and gradient boosting models. </li> </ul> <p>In the end, the solution was not significantly different from what I had shared a few weeks prior. It was a flagship piece and alone it was a public lb gold zone solution for a long time. We utilized this piece in other pieces too as I'll explain in next section. However it was a risky pipeline alone in terms of overfitting. 
It needed more generalized models around it to mitigate this effect…</p> <p><strong>LLM Approach:</strong></p> <p>I will briefly discuss this aspect, as my teammate <a href="https://www.kaggle.com/chasembowers" target="_blank">@chasembowers</a> explained the details in depth <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/471934" target="_blank">here</a>. Essentially, we developed mistral-7b based features (think of token probabilities, features over logprobs, perplexity etc.), to train a discriminator and make predictions with it.</p> <p>But we took it further by pseudo-labeling the test set using the previous TF-IDF approach and retraining a bunch of ML models based on these labels using the mistral features, such as linear and gradient boosting models. This helped us find a good balance between the LLM's generalization capabilities and TF-IDF's token based, task-specific strong indicators.</p> <p><strong>Transformer-1: Longformer</strong> </p> <p>In this complimentary experimental piece, we trained a longformer model, but instead of predicting binary outcome, we set the training set source as the target. Then we merged these predictions with previous predictions to create a feature space and trained another set of models over them.</p> <p><strong>Transformer-2: Deberta Large:</strong></p> <p>In this final piece, we trained a DebertaV3 model using a vast merge of publicly available data, amounting to around 700k instances. This model served as our regularizer and a fallback option for instances when TF-IDF based models were uncertain, such as cases involving data drift in private data. It also assisted in softening overly confident predictions.</p> <p>We trained this model for 1 epoch (it was already achieving 0.999 CV early on) with 1024 sequence length, without early stopping and with minimal or no learnable parameters in the first layers of the model.</p> <p><strong>Final Ensemble:</strong></p> <p>In final part of ensemble each model had it's own weight, but most importantly we didn't gave weights over their public lb scores since it was too risky, we used common sense and balanced approach while manually setting the weights since they all had perfect CV scores.</p> <p>Here's the training(most of it) and inference <a href="https://www.kaggle.com/code/chasembowers/4th-place-solution/notebook" target="_blank">notebook</a>. [It should be public pretty soon.]</p> <p><strong>Final Thoughts:</strong></p> <p>I truly enjoyed this competition, particularly the collaborative spirit of sharing ideas, sources, and working together. Surviving this intense shakeup is truly rewarding, as is sharing our findings in discussions and notebooks.</p> <p>Throughout the competition, as a team and individually, we explored a vast number of ideas, and while many did not work out, the final week was particularly exhausting as we wrapped up and finalized our work.</p> <p>I've tried to keep this post simple, and I apologize if I missed any crucial details. Please feel free to ask any questions you may have in this thread, and I will be more than happy to provide answers when I have time.</p> <p>Just classical fine tune using cls token and many frozen encoder layers. 4 batch size, no quantization (lora didn't go well too btw), 30'ish hours on consumer level gpu. </p> <p>Thanks <a href="https://www.kaggle.com/sheriytm" target="_blank">@sheriytm</a>. As I mentioned before, there wasn't much in the way of mathematical optimization. 
However, based on prior knowledge, we could say that some models were more prone to overfitting or exhibited higher variance. Additionally, we can make assumptions based on their pretraining data and fine-tune relevant parameters. Moreover, we can analyze the probability distribution of the training and validation data, as well as the examples affected by data drift (like testing on manually entered, out-of-context data).</p> <p>Thanks Murat, it felt really rewarding at the end, but it was a tough process.</p> <p>Thanks <a href="https://www.kaggle.com/crsuthikshnkumar" target="_blank">@crsuthikshnkumar</a> </p>
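<p>To make the pseudo-labeling step described above easier to picture, here is a minimal, purely illustrative sketch (not the team's actual code): <code>tfidf_probs</code> stands for the TF-IDF ensemble's probabilities on the hidden test set, <code>mistral_feats</code> for a per-essay feature matrix built from Mistral-7B log-probabilities, and the confidence cut-off, model choices and blend weights are all assumptions.</p>
<pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression
from lightgbm import LGBMClassifier

def pseudo_label_blend(tfidf_probs, mistral_feats, keep_frac=0.8):
    # Keep only the most confident TF-IDF predictions as pseudo-labels to limit label noise.
    tfidf_probs = np.asarray(tfidf_probs, dtype=float)
    conf = np.abs(tfidf_probs - 0.5)
    keep = conf >= np.quantile(conf, 1 - keep_frac)
    y_pseudo = (tfidf_probs[keep] > 0.5).astype(int)

    # Re-train simple models on the LLM-derived features using the pseudo-labels.
    preds = []
    for model in (LogisticRegression(max_iter=1000), LGBMClassifier(n_estimators=300)):
        model.fit(mistral_feats[keep], y_pseudo)
        preds.append(model.predict_proba(mistral_feats)[:, 1])

    # Blend the re-trained feature models back with the original TF-IDF scores.
    return 0.5 * np.mean(preds, axis=0) + 0.5 * tfidf_probs
</code></pre>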
LLM - Detect AI Generated Text
[5th Place Solution in Efficiency Prize] 327th Place in Private LB
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for your advice. I received the email!</p> <p>I understood. And today I received an email from Kaggle.<br> Thanks a lot.</p> <p>I would like to thank Kaggle and THE LEARNING AGENCY LAB for hosting such an exciting competition. <br> I'm really happy to be in 5th place for the efficiency prize.<br> Let me explain my solution. </p> <p><strong>Solution Summary:</strong></p> <p><strong>Baseline</strong><br> I used the following public notebooks.<br> <a href="https://www.kaggle.com/code/hubert101/0-960-phrases-are-keys" target="_blank">https://www.kaggle.com/code/hubert101/0-960-phrases-are-keys</a><br> Thanks to <a href="https://www.kaggle.com/hubert101" target="_blank">@hubert101</a> and everyone who shared their notebooks!</p> <p><strong>Datasets</strong><br> The following datasets were used.<br> <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset</a><br> Thanks to <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> !<br> Since this dataset contained multiple sources, I selected only those sources that pushed the LB higher and used only 10 sources. Three of the sources used only specific prompts.</p> <p><strong>Encoding</strong><br> I used the Byte-Pair Encoding tokenizer without any changes from my notebook.<br> However, I applied the following code to speed up the process. I doubt if it was effective.</p> <pre><code>num_batches = (len(dataset) + batch_size - 1) // batch_size
for batch_index in range(num_batches):
    start_index = batch_index * batch_size
    end_index = min((batch_index + 1) * batch_size, len(dataset))
    yield dataset[start_index:end_index]["text"]
</code></pre> <p><strong>Model</strong><br> The model was an ensemble of:<br> 3 types of ComplementNB,<br> 3 types of SGDClassifier, and<br> one LGBMClassifier.<br> SGD used class_weight, and LGBM reduced n_iter to 300 to reduce inference time. (A rough sketch of this kind of ensemble is given at the end of this post.)</p> <p>I am really happy because I did not expect it.</p> <p>By the way, please let me know if you know anything about this.<br> My team name is in 5th place on the Efficiency LB, but I haven't heard anything from Kaggle staff, should I just wait for an email?<br> Do I need to contact Kaggle staff in any way?<br> This is my first prize and I am nervous every day that it might be a mistake!</p> <p>Thank you!</p> <p><strong>Code</strong><br> <a href="https://www.kaggle.com/code/ootake/efficiency-lb-5th-place-solution" target="_blank">https://www.kaggle.com/code/ootake/efficiency-lb-5th-place-solution</a></p> <p>Thank you for pointing that out.<br> I have obtained the Kaggle template, so I am working hard on creating the slides.<br> It's my first time, so I'm feeling a bit confused.😅</p>
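<p>For reference, here is a rough sketch of the kind of ensemble described in the Model section above (one ComplementNB, one SGDClassifier and one LGBMClassifier over TF-IDF features). The hyper-parameters, the BPE tokenization step and the exact number of model variants in the actual notebook differ; <code>train_texts</code>, <code>y_train</code> and <code>test_texts</code> are assumed to exist.</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import ComplementNB
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import VotingClassifier
from lightgbm import LGBMClassifier

vectorizer = TfidfVectorizer(ngram_range=(3, 5), sublinear_tf=True)
X_train = vectorizer.fit_transform(train_texts)
X_test = vectorizer.transform(test_texts)

clf = VotingClassifier(
    estimators=[
        ("cnb", ComplementNB(alpha=0.02)),
        # modified_huber supports predict_proba, which soft voting requires
        ("sgd", SGDClassifier(loss="modified_huber", class_weight="balanced")),
        ("lgb", LGBMClassifier(n_estimators=300)),
    ],
    voting="soft",
)
clf.fit(X_train, y_train)
test_probs = clf.predict_proba(X_test)[:, 1]
</code></pre>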
LLM - Detect AI Generated Text
[7th Place Solution in Efficiency Prize]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I want to express my gratitude to Kaggle and THE LEARNING AGENCY LAB for organizing this thrilling competition. The insightful posts in the discussion forum have greatly benefited me as well. I'm also happy to share my solution.<br><br> <br></p> <p>Business context: https://www.kaggle.com/competitions/llm-detect-ai-generated-text<br><br> Data context: https://www.kaggle.com/competitions/llm-detect-ai-generated-text/data<br></p> <h2>Overview of the Approach</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F17316913%2F56e1595dcaa2c42f5908e280e7e2594c%2F_20240223230955.png?generation=1708792266125478&amp;alt=media"><br> Fig. 1 The pipeline of our solution<br><br> <br><br> <br></p> <p>Our solution includes six parts based on this <a href="https://www.kaggle.com/code/batprem/llm-daigt-excluded-prompts?scriptVersionId=158926419" target="_blank">public work</a>.<br></p> <p><strong>Data augmentation.</strong> The mainstream approach in this competition is to seek or generate diverse datasets of student writing and large language model-generated data for data augmentation. Many Kagglers have generously shared their own datasets. Our approach also involves searching for external open-source data, and we have utilized the following external datasets:<br><br> <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset</a> <br><br> <a href="https://www.kaggle.com/datasets/alejopaullier/argugpt" target="_blank">https://www.kaggle.com/datasets/alejopaullier/argugpt</a> <br><br> <a href="https://www.kaggle.com/datasets/kagglemini/train-00000-of-00001-f9daec1515e5c4b9" target="_blank">https://www.kaggle.com/datasets/kagglemini/train-00000-of-00001-f9daec1515e5c4b9</a> <br><br> <a href="https://www.kaggle.com/datasets/pbwic036/commonlit-data" target="_blank">https://www.kaggle.com/datasets/pbwic036/commonlit-data</a> <br><br> <a href="https://www.kaggle.com/datasets/wcqyfly/argu-train" target="_blank">https://www.kaggle.com/datasets/wcqyfly/argu-train</a> <br><br> <br></p> <p><strong>Removing duplicate text and data filtering.</strong> This part is the same as the <a href="https://www.kaggle.com/code/batprem/llm-daigt-excluded-prompts?scriptVersionId=158926419" target="_blank">public work</a>.<br><br> <br></p> <p><strong>Tokenization using Byte-pair Encoding.</strong> This part is the same as the <a href="https://www.kaggle.com/code/batprem/llm-daigt-excluded-prompts?scriptVersionId=158926419" target="_blank">public work</a>; a rough sketch of this step is given at the end of this post.<br><br> <br></p> <p><strong>Generate TF-IDF vectors for model training.</strong> We adjusted parameters such as min_df, max_df, and max_features to effectively reduce the feature dimensionality and minimize the impact of noisy data.<br><br> <br></p> <p><strong>Train MultinomialNB and stochastic gradient descent (SGD).</strong> We adjusted the additive-smoothing parameter alpha for MultinomialNB and the number of training steps for SGD. 
<br><br> <br></p> <p><strong>Ensemble MNB and SGD.</strong> We combined Multinomial Naive Bayes (MNB) and Stochastic Gradient Descent (SGD) using different weights. <br></p> <h2>Run time optimization</h2> <p><strong>Combining the datasets <a href="https://www.kaggle.com/wcqyfly/notebook95c85fa3c6" target="_blank">in another notebook</a>.</strong><br><br> <br></p> <p><strong>Reducing the feature size as mentioned above.</strong> We adjusted parameters such as min_df, max_df, and max_features.<br><br> <br></p> <p><strong>Removing time-consuming models.</strong> We found that adding LightGBM and CatBoost not only took more time but also decreased the LB score, so we removed these two models. In hindsight this choice was wrong and made our PB score very low. But fortunately, it gave us the opportunity to obtain the Efficiency Prize.<br></p> <h2>Details of the submission</h2> <table> <thead> <tr> <th>Version</th> <th>Public Score</th> <th>Private Score</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>0.962507</td> <td>0.861747</td> </tr> <tr> <td>2</td> <td>0.962465</td> <td>0.853448</td> </tr> <tr> <td>3</td> <td>0.967997</td> <td>0.891007</td> </tr> <tr> <td>4</td> <td>0.967267</td> <td>0.898650</td> </tr> <tr> <td>5</td> <td>0.967885</td> <td>0.890983</td> </tr> </tbody> </table> <h2>What did not work</h2> <p>We adjusted the n-gram range for TF-IDF; it did not improve either the public or the private score.<br></p> <h2>Our code</h2> <p><strong>Our code can be accessed on Kaggle or GitHub. The links are as follows:</strong><br></p> <p><strong>1.kaggle:</strong> <a href="https://www.kaggle.com/code/wcqyfly/fork-of-fork-of-fork-of-llm-daigt-analyse-e-db6333" target="_blank">https://www.kaggle.com/code/wcqyfly/fork-of-fork-of-fork-of-llm-daigt-analyse-e-db6333</a> <br><br> <strong>note:</strong> Because the public test set has fewer than 3 rows, running the notebook directly will raise an error; after submission, when the test set is replaced with the hidden test set, the code runs correctly and produces the result.<br></p> <p><strong>2.github:</strong> <a href="https://github.com/wcqy001028/LLM-Detect-AI-Generated-Text" target="_blank">https://github.com/wcqy001028/LLM-Detect-AI-Generated-Text</a><br> <strong>note:</strong> If test_essays.csv contains fewer than 5 rows, min_df is set to 1 and the model is not trained (this is only used for debugging). Conversely, when test_essays.csv contains more than 5 rows, min_df is set to 2 and the model is trained and generates prediction results. 
<br></p> <h2>Reference</h2> <ol> <li><a href="https://www.kaggle.com/code/batprem/llm-daigt-excluded-prompts?scriptVersionId=158926419" target="_blank">https://www.kaggle.com/code/batprem/llm-daigt-excluded-prompts?scriptVersionId=158926419</a></li> <li><a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset</a></li> <li><a href="https://www.kaggle.com/datasets/alejopaullier/argugpt" target="_blank">https://www.kaggle.com/datasets/alejopaullier/argugpt</a></li> <li><a href="https://www.kaggle.com/datasets/kagglemini/train-00000-of-00001-f9daec1515e5c4b9" target="_blank">https://www.kaggle.com/datasets/kagglemini/train-00000-of-00001-f9daec1515e5c4b9</a></li> <li><a href="https://www.kaggle.com/datasets/pbwic036/commonlit-data" target="_blank">https://www.kaggle.com/datasets/pbwic036/commonlit-data</a></li> <li><a href="https://www.kaggle.com/datasets/wcqyfly/argu-train" target="_blank">https://www.kaggle.com/datasets/wcqyfly/argu-train</a></li> <li><a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/468908" target="_blank">https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/468908</a></li> <li><a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/455711" target="_blank">https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/455711</a></li> </ol>
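<p>For readers unfamiliar with the Byte-Pair Encoding step mentioned above, here is a rough sketch in the spirit of the public notebooks this solution builds on: a BPE tokenizer is trained and its sub-word tokens are fed to TF-IDF. The vocabulary size, n-gram range and other parameters shown are illustrative, and <code>test_texts</code>/<code>train_texts</code> are assumed to exist.</p>
<pre><code>from tokenizers import Tokenizer, models, pre_tokenizers, trainers
from sklearn.feature_extraction.text import TfidfVectorizer

def train_bpe_tokenizer(texts, vocab_size=30000):
    tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
    tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
    trainer = trainers.BpeTrainer(vocab_size=vocab_size, special_tokens=["[UNK]"])
    tokenizer.train_from_iterator(texts, trainer=trainer)
    return tokenizer

bpe = train_bpe_tokenizer(test_texts)            # fit the vocabulary on the (hidden) test essays
tokenize = lambda text: bpe.encode(text).tokens  # sub-word tokens become the TF-IDF "words"

vectorizer = TfidfVectorizer(
    ngram_range=(3, 5), tokenizer=tokenize,
    preprocessor=lambda x: x, token_pattern=None,
    min_df=2, sublinear_tf=True,
)
X_test = vectorizer.fit_transform(test_texts)
X_train = vectorizer.transform(train_texts)
</code></pre>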
LLM - Detect AI Generated Text
[7th Place Solution] Generate Data with Non-Instruction-Tuned Models
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2><strong>Forewords</strong></h2> <p>Firstly, a big thank you to Kaggle, the Learning Agency Lab, and Vanderbilt University for orchestrating this interesting competition. Hats off to you all! 🎩 This marks my inaugural Kaggle competition, and the learning curve has been quite the journey.<br> Early in the competition, I had reservations about how well the TF-IDF approach might translate to real-world scenarios. This skepticism led me to delve into LM-based methods and resist the urge to switch back. Little did I anticipate the leaderboard shake-up would be this significant and the outcome has genuinely taken me by surprise.</p> <h2><strong>Final Solution</strong></h2> <p>My "magic sauce" is to use only non-instruction-tuned models to generate data. If we look at the 3 generated essays provided in this competition, they seem increasingly human-like. More importantly, none of them has the generic ChatGPT answer format. I believe this was a big hint by the host. <br> After reading <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/465882" target="_blank">James' post</a> and <a href="https://www.lesswrong.com/posts/t9svvNPNmFf5Qa3TA/mysteries-of-mode-collapse-due-to-rlhf#The_one_answer_is_that_there_is_no_one_answer" target="_blank">this excellent article</a> (tl;dr instruction tuning has an adverse effect on response diversity), I generated around 400k non-essay texts from slimpajama and 25k essays with persuade 2.0. Models used are vanilla Falcon-7B, Mistral-7B, and Llama2-7B with temperatures ranging from 0.8 to 1.2, top p values from 0.8 to 0.98, and frequency penalties from 0 to 0.3.<br> Then I simply finetuned a single DeBERTa-v3-large model with this dataset at 512 context length and inference at 1024 length. One training run takes around 8 hours on a 3090. I tried training at 1024 length, but the results were worse. I couldn't find much information on why this is the case, so insights on this matter would be appreciated.<br> In my final solution, predictions falling within the 40th to 60th percentile were replaced with outcomes from a basic TF-IDF + SGD model. 
This tweak resulted in a modest increase in my CV, but had minimal impact on LB.</p> <table> <thead> <tr> <th>Model</th> <th>Dataset</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>DeBERTa-v3-base-512</td> <td>only persuade</td> <td>0.869</td> <td>0.875</td> </tr> <tr> <td>DeBERTa-v3-base-512</td> <td>persuade &amp; slimpajama</td> <td>0.921</td> <td>0.920</td> </tr> <tr> <td>DeBERTa-v3-base-1024</td> <td>persuade &amp; slimpajama</td> <td>0.910</td> <td>0.901</td> </tr> <tr> <td>DeBERTa-v3-large-512</td> <td>persuade &amp; slimpajama</td> <td>0.942</td> <td>0.965</td> </tr> <tr> <td>DeBERTa-v3-large-1024</td> <td>persuade &amp; slimpajama</td> <td>0.922</td> <td>0.957</td> </tr> <tr> <td>DeBERTa-v3-large-512 + TF-IDF</td> <td>persuade &amp; slimpajama</td> <td>0.942</td> <td>0.965</td> </tr> </tbody> </table> <h2><strong>Prompt Engineering</strong></h2> <p>My prompt for generating essays is as follows:</p> <pre><code>(100 points) [essay instruction] Your text should be around 500 words.\n\n\n\n Name: [random name based on gender] Grade: [grade] Date: [random date] [essay title] [one cleaned essay from persuade 2.0]\n\n\n\n Name: [another random name with the same gender] Grade: [same grade] Date: [same date] [essay title] </code></pre> <p>Grade and gender information were taken directly from persuade 2.0 dataset. I made 10 generations for each persuade 2.0 essay, and then filtered generations that are too similar to the sample human essay using embedding similarity and Levenstein distance. Generations were then further filtered by length and repetition.</p> <h2><strong>List of Failed Ideas</strong></h2> <ul> <li>Separate essay into list of semantic blocks, embed each block, then train a bi-LSTM on top of it. (score 0.763)</li> <li>Same architecture, but train LSTM on the differences of each pair of neighboring blocks. (score 0.685)</li> <li><a href="https://arxiv.org/pdf/2005.01229.pdf" target="_blank">Robust Encoding</a> to combat character-level adversarial attacks.</li> <li>Sliding window inference and averaging the logits. (score 0.929)</li> <li>MLM pretraining on the test set (10 epochs with deberta-v3-xsmall), then finetune on my dataset. (score 0.689)</li> <li>And a lot more ideas that didn't even pass local CV.</li> </ul> <h2><strong>Dataset and Code</strong></h2> <h3>Training and Dataset Generation Code</h3> <p><a href="https://github.com/Tailen/Kaggle-Detect-AI-Generated-Text-7th-Solution" target="_blank">https://github.com/Tailen/Kaggle-Detect-AI-Generated-Text-7th-Solution</a></p> <h3>Datasets</h3> <p><a href="https://www.kaggle.com/datasets/tailen/persuade-corpus-ai-generated-dataset" target="_blank">https://www.kaggle.com/datasets/tailen/persuade-corpus-ai-generated-dataset</a><br> <a href="https://www.kaggle.com/datasets/tailen/slimpajama-ai-generated-parallel-dataset" target="_blank">https://www.kaggle.com/datasets/tailen/slimpajama-ai-generated-parallel-dataset</a></p> <h3>Inference Code</h3> <p><a href="https://www.kaggle.com/code/tailen/daigt-deberta" target="_blank">https://www.kaggle.com/code/tailen/daigt-deberta</a></p> <p>The 25k essays were generated using the prompt in my post. The 400k non-essay texts were generated with the first 100 blocks of SlimPajama using the same method mentioned in <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/465882" target="_blank">James' post</a>. 
I removed the books and code entries, filtered them by length, then randomly chose 10 tokens from each text as the seed prompt. I used vLLM for generation and was getting around 1 second per completion for Falcon-7B and Mistral-7B, and 3 seconds per completion for Llama2-7B on a 3090. I will share the generation script later.</p>
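<p>Below is a hedged sketch of what such a vLLM generation loop could look like. The model name, <code>max_tokens</code> value and prompt construction are assumptions for illustration; the sampling ranges are the ones quoted above, and the real prompts and filters live in the linked repository.</p>
<pre><code>import random
from vllm import LLM, SamplingParams

# A non-instruction-tuned base model, as in the write-up (Falcon-7B / Llama2-7B are used the same way).
llm = LLM(model="mistralai/Mistral-7B-v0.1")

def sample_params():
    # Sampling ranges quoted in the post: temperature 0.8-1.2, top_p 0.8-0.98, frequency penalty 0-0.3.
    return SamplingParams(
        temperature=random.uniform(0.8, 1.2),
        top_p=random.uniform(0.8, 0.98),
        frequency_penalty=random.uniform(0.0, 0.3),
        max_tokens=700,
    )

def generate_essays(prompts, n_per_prompt=10):
    # `prompts` are built from the few-shot Persuade 2.0 template shown in the Prompt Engineering section.
    texts = []
    for _ in range(n_per_prompt):
        for out in llm.generate(prompts, sample_params()):
            texts.append(out.outputs[0].text)
    return texts
</code></pre>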
LLM - Detect AI Generated Text
[8th LB Solution] Linguistic Features: PPL & GLTR
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First off, I’d like to thank Kaggle, The Learning Agency Lab and <a href="https://www.kaggle.com/cookiecutters" target="_blank">@cookiecutters</a> for such an interesting competition! I definitely learnt a lot from this competition. I personally think this is a huge accomplishment for me, especially since I graduated from my Bachelors just 5 months ago.</p> <p>I feel incredibly blessed to have experienced such a significant shake-up! While I anticipated a slight bump upwards, I did not expect the solution to generalize this well!</p> <p>This is my 2nd competition and very first discussion post. Apologies in advance if I have said/done something wrong. Honest feedback would be appreciated!</p> <p><strong>TL;DR</strong>: <a href="https://huggingface.co/spaces/abdullahmeda/detect-ai-text" target="_blank">huggingface.co/spaces/abdullahmeda/detect-ai-text</a></p> <h2>Linguistic Analysis: Language Model Perplexity</h2> <p>The perplexity (PPL) is commonly used as a metric for evaluating the performance of language models (LM). It is defined as the exponential of the negative average log-likelihood of the text under the LM. A lower PPL indicates that the language model is more confident in its predictions, and is therefore considered to be a better model. The training of LMs is carried out on large-scale text corpora, it can be considered that it has learned some common language patterns and text structures. Therefore, PPL can be used to measure how well a text conforms to common characteristics. </p> <p>I used all variants of the open-source GPT-2 model except xl size to compute the PPL (both text-level and sentence-level PPLs) of the collected texts. It is observed that, regardless of whether it is at the text level or the sentence level, the content generated by LLMs have relatively lower PPLs compared to the text written by humans. LLM captured common patterns and structures in the text it was trained on, and is very good at reproducing them. As a result, text generated by LLMs have relatively concentrated low PPLs. </p> <p>Humans have the ability to express themselves in a wide variety of ways, depending on the context, audience, and purpose of the text they are writing. This can include using creative or imaginative elements, such as metaphors, similes, and unique word choices, which can make it more difficult for GPT2 to predict. The PPL distributions of text written by humans and text generated by LLMs are shown in the figure below.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9737174%2F8ae67eaef95f46798229a571536f26d0%2Fdownload-13.png?generation=1706012738497005&amp;alt=media"></p> <h2>GLTR: Giant Language Model Test Room</h2> <p>This idea originates from the following paper: arxiv.org/pdf/1906.04043.pdf. It studies 3 tests to compute features of an input text. Their major assumption is that to generate fluent and natural-looking text, most decoding strategies sample high probability tokens from the head of the distribution. 
I selected the most powerful Test-2 feature, which is the number of tokens in the Top-10, Top-100, Top-1000, and 1000+ ranks from the LM predicted probability distributions.</p> <h2>The Winning Solution</h2> <p>The brains behind this implementation is the following paper: arxiv.org/pdf/2301.07597.pdf. Features from the 2 aforementioned techniques were combined and a VotingClassifier was trained. I first computed the features independently on the 44k daigt v2 dataset but had only gained an ROC AUC Score of less than 0.7. Using the 800k dataset shared by <a href="https://www.kaggle.com/starblasters8" target="_blank">@starblasters8</a> was the real deal breaker. The scores had shot up to 0.8 and 0.9 for PPL and GLTR respectively. Combining them and using bigger models gave better results. Though CV was calculated, it was rather useless to track as it always close to 1. Following is a breakdown of the scores for all my solution iterations:</p> <table> <thead> <tr> <th>Data</th> <th>Features</th> <th>Model</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>44K</td> <td>PPL</td> <td>GPT-2 Small</td> <td>0.674</td> <td>0.506</td> </tr> <tr> <td>44K</td> <td>GLTR</td> <td>GPT-2 Small</td> <td>0.637</td> <td>0.445</td> </tr> <tr> <td>44K</td> <td>PPL + GLTR</td> <td>GPT-2 Small</td> <td>0.661</td> <td>0.465</td> </tr> <tr> <td>800K</td> <td>PPL</td> <td>GPT-2 Small</td> <td>0.800</td> <td>0.656</td> </tr> <tr> <td>800K</td> <td>GLTR</td> <td>GPT-2 Small</td> <td>0.910</td> <td>0.918</td> </tr> <tr> <td>800K</td> <td>PPL + GLTR</td> <td>GPT-2 Small</td> <td>0.923</td> <td>0.926</td> </tr> <tr> <td>800K</td> <td>PPL + GLTR</td> <td>GPT-2 Medium</td> <td>0.932</td> <td>0.953</td> </tr> <tr> <td>800K</td> <td>PPL + GLTR</td> <td>GPT-2 Large</td> <td>0.938</td> <td>0.956</td> </tr> </tbody> </table> <ul> <li>Numerous techniques combined with the above often tended to negatively affect the score such as:<ul> <li>Training my gpt2 tokenizer on vocabulary of the test set</li> <li>Using the T5 DeObfuscator</li> <li>Log likelihoods as a feature (Didn’t research much on this)</li> <li>PySpellChecker to correct typos</li> <li>Textual statistical scores such as the flesch_reading_ease, flesch_kincaid_grade, smog_index, coleman_liau_index among others</li></ul></li> </ul> <p>Features and scikit-learn models were persisted using joblib and called on inference to save time. An average submission using this approach just took about <strong>10 minutes</strong>! </p> <h2>What didn’t work?</h2> <h3>1. LLM Finetuning</h3> <p>Inspired by Hello-SimpleAI/chatgpt-detector-roberta, I had fine-tuned this model itself along with a roberta-large using the 800K samples dataset provided by <a href="https://www.kaggle.com/starblasters8" target="_blank">@starblasters8</a> . I had high hopes for this approach but got scores no higher than 0.793 and 0.674 on the Public and Private LB respectively, although it had a cv close to 0.98. </p> <p>Looking at <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/465882" target="_blank">this discussion</a> by <a href="https://www.kaggle.com/jsday96" target="_blank">@jsday96</a> , I figured using a slightly bigger model such as a deberta-v3-large along with extending the 800k dataset to include other open datasets such as Pile or SlimPajama would improve the score. Unfortunately, I had to drop that idea since I neither had the experience nor the computational power to deal with so much data!</p> <h3>2. 
Custom Tokenizer &amp; TF-IDF</h3> <p>Though the public TF-IDF approach produced a very tempting score, I suspected it was overfitting to the public test set. However, I did not expect it to fall as low as 0.89. On the contrary, my best unselected submission scored 0.964, which was a blend of TF-IDF along with the PPL &amp; GLTR approach with equal weightage.</p> <h2>Code Access</h2> <p>That code can be found at <a href="https://www.kaggle.com/code/abdullahmeda/8th-lb-solution-ppl-gltr-features" target="_blank">kaggle.com/code/abdullahmeda/8th-lb-solution-ppl-gltr-features</a></p> <h2>References</h2> <ul> <li><a href="https://arxiv.org/pdf/2301.07597.pdf" target="_blank">https://arxiv.org/pdf/2301.07597.pdf</a></li> <li><a href="https://arxiv.org/pdf/1906.04043.pdf" target="_blank">https://arxiv.org/pdf/1906.04043.pdf</a></li> <li><a href="https://github.com/HendrikStrobelt/detecting-fake-text" target="_blank">https://github.com/HendrikStrobelt/detecting-fake-text</a></li> <li><a href="https://github.com/nlp2ct/llm-generated-text-detection" target="_blank">https://github.com/nlp2ct/llm-generated-text-detection</a></li> <li><a href="https://huggingface.co/Hello-SimpleAI/chatgpt-detector-roberta" target="_blank">https://huggingface.co/Hello-SimpleAI/chatgpt-detector-roberta</a></li> <li><a href="https://github.com/Hello-SimpleAI/chatgpt-comparison-detection" target="_blank">https://github.com/Hello-SimpleAI/chatgpt-comparison-detection</a></li> </ul> <p>Last but not least, was this discussion LLM Generated? 🤖🤣</p> <p>I had not researched this topic prior to joining this competition. To be specific, this direction of research was adopted completely just during the last 15 to 20 days.</p> <p>I had used models of the gpt-2 family as that is what was suggested in the referenced paper. I had also begun experimenting with gpt-3 replications by EleutherAI, but it had caused an error too hard/time-consuming to debug, at least for me, so I had to drop that idea. I had used the raw model itself without any finetuning. The model you had mentioned at the link is correct!</p> <p>I have only used the raw model, but now that you mention it, I wonder if fine-tuning would have produced better results!</p> <p>As for the shift operation, it is used to align the model’s predictions (logits) with the actual tokens (input_ids) for the purpose of calculating the loss. I believe changing it to 2 or 3 would mean that the model is trying to predict the token after the next one or the token after the next two tokens, given the previous tokens.</p> <p>Thank you for the compliment! As for the efficiency LB, I think the solution wouldn't be applicable as it utilizes a GPU.</p> <p>Sorry about that! I have updated all the required permissions</p> <p>The model training was rather simple. I had used an XGBClassifier, LGBMClassifier, CatBoostClassifier and RandomForestClassifier with default parameters. Tuning them might have provided a better result, although I didn't want to tune them so much that I might overfit to the public test set</p> <pre><code>%%time
# NOTE: the literal seed, weight and column values were lost in the original post's formatting;
# the defaults below are illustrative reconstructions.
xgb = XGBClassifier(random_state=42)
lgb = LGBMClassifier(random_state=42)
cat = CatBoostClassifier(random_state=42)
rfc = RandomForestClassifier(random_state=42)

clf = VotingClassifier(
    n_jobs=-1,
    voting='soft',              # soft voting is required for predict_proba
    weights=[1, 1, 1, 1],
    estimators=[
        ('xgb', xgb),
        ('lgb', lgb),
        ('cat', cat),
        ('rfc', rfc)
    ]
).fit(X_train, y_train)

sub['generated'] = clf.predict_proba(X_test)[:, 1]
</code></pre>
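<p>To make the PPL and GLTR features described above concrete, here is a compact sketch of how they can be computed with GPT-2. The truncation length and bucket boundaries follow the standard GLTR Test-2 setup (top-10 / top-100 / top-1000 / rest); the exact feature set in the winning notebook may differ slightly.</p>
<pre><code>import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()

@torch.no_grad()
def ppl_and_gltr(text, max_len=1024):
    ids = tok(text, return_tensors="pt", truncation=True, max_length=max_len)["input_ids"].to(device)
    logits = model(ids).logits[0, :-1]            # prediction for the next token given the previous ones
    targets = ids[0, 1:]

    # Perplexity: exp of the mean negative log-likelihood of the observed tokens.
    logprobs = torch.log_softmax(logits, dim=-1)
    nll = -logprobs[torch.arange(targets.size(0)), targets]
    ppl = torch.exp(nll.mean()).item()

    # GLTR Test-2: rank of each observed token under the LM's predictive distribution.
    ranks = (logits.argsort(dim=-1, descending=True) == targets[:, None]).nonzero()[:, 1]
    buckets = [(ranks < 10).sum(), ((ranks >= 10) & (ranks < 100)).sum(),
               ((ranks >= 100) & (ranks < 1000)).sum(), (ranks >= 1000).sum()]
    return [ppl] + [int(b) for b in buckets]
</code></pre>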
LLM - Detect AI Generated Text
[Solution of 4th Place in Efficiency LB]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly, I would like to thank Kaggle and THE LEARNING AGENCY LAB for hosting such an exciting competition. During this competition, I have been following the top solutions in the discussion area. These ideas are very cool, and I have learned a lot from them. I am deeply appreciative of the participants who generously shared their insights and observations.</p> <p>My solution was simple, yet to my surprise, it secured the 4th place in the private Efficiency LB. From the perspective of the Public Efficiency LB, this achievement seemed almost impossible…</p> <p>I would like to extend my thanks to <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> for providing <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">DAIGT-V2</a>. I also acknowledge <a href="https://www.kaggle.com/mustafakeser4" target="_blank">@mustafakeser4</a> for sharing a high-scoring <a href="https://www.kaggle.com/code/mustafakeser4/inference-detectai-distilroberta-0-927" target="_blank">Bert model</a> and the source of the training data. Although I did not utilize the Bert model in my final submission, I incorporated the data as external training data. The results clearly indicate that this external data was the key factor behind this surprise.</p> <h4><strong>Solution Summary:</strong></h4> <p>I used multiple classifiers to classify the TFIDF features. Our code comes from</p> <p><a href="https://www.kaggle.com/code/batprem/llm-daigt-cv-0-9983-lb-0-960" target="_blank">https://www.kaggle.com/code/batprem/llm-daigt-cv-0-9983-lb-0-960</a>,</p> <p>thanks to <a href="https://www.kaggle.com/batprem" target="_blank">@batprem</a> for sharing.</p> <p>At the beginning of my participation in the competition, I observed that the leaders in the Public LB often also topped the Efficiency LB. This led me to hypothesize that utilizing TFIDF alone could simultaneously yield high scores and efficiency. Throughout the remainder of the competition, my focus was on enhancing the score as much as possible without compromising the algorithm’s efficiency.</p> <p>Here is a summary of my solution:</p> <ol> <li>I found that Cat was the primary cause of the algorithm’s slowdown. Consequently, I removed Cat while retaining lgb. This action reduced the algorithm’s runtime from approximately 2 hours to about 15 minutes.</li> <li>I added ComplementNB and LinearSVC as base classifiers, which could slightly improve the public LB score.</li> <li>I utilized DAIGT-V2 and the previously mentioned external training data. I ran the same feature extraction and classification algorithms on the two datasets separately and then ensemble them in a weight of 6:4. This resulted in a public score of 963 and a private score of 916. 
Besides, 75:25 yielded a public score of 964 and a private score of 906.</li> <li>Taking inspiration from <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/468150" target="_blank">the tricks for AUC metric</a>, I applied min-max normalization to each prediction result prior to integration, slightly improving both the public and private scores (a minimal sketch of this normalization is given at the end of this post).</li> </ol> <p>I also tried mixing DAIGT-V2 and the external data for training instead of blending. This method produced a public score of 955 and a private score of 934. Due to the low public score, I did not select this submission as the final result. In my experiments with different ensemble weights, I found that the more the weight is biased towards the external dataset, the higher the private score, but the lower the public score, so I only tried 6:4 and did not continue to try larger weights. I believe that if only external data is used for training, or if a more significant weight is set for it, it is possible to achieve 1st place in the Efficiency LB.</p> <h4>Code</h4> <p><a href="https://www.kaggle.com/superfei/solution-of-4th-place-in-efficiency-lb" target="_blank">https://www.kaggle.com/superfei/solution-of-4th-place-in-efficiency-lb</a></p>
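<p>A minimal sketch of the min-max normalization and weighted blend from point 4 above: the idea is to put both models' predictions on the same [0, 1] scale before averaging, so that neither score distribution dominates the rank-based AUC blend. The 6:4 weight is the one quoted in the post; everything else is illustrative.</p>
<pre><code>import numpy as np

def minmax(p):
    p = np.asarray(p, dtype=float)
    return (p - p.min()) / (p.max() - p.min() + 1e-12)

def blend(pred_daigt_v2, pred_external, w=0.6):
    # w = weight of the model trained on DAIGT-V2, (1 - w) = weight of the external-data model
    return w * minmax(pred_daigt_v2) + (1 - w) * minmax(pred_external)
</code></pre>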
Large Scale Hierarchical Text Classification
Winning Solution Description
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Large Scale Hierarchical Text Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi,</p> <p>I hope everyone had a good competition. Our team won after a close fight between the top 3 contenders. We've written up a description of our models and the code that can be used to reproduce the winning solution. In brief:</p> <p>Our winning submission to the 2014 Kaggle competition for Large Scale Hierarchical Text Classification (LSHTC) consists mostly of an ensemble of sparse generative models extending Multinomial Naive Bayes. The base-classifiers consist of hierarchically smoothed models combining document, label, and hierarchy level Multinomials, with feature pre-processing using variants of TF-IDF and BM25. Additional diversification is introduced by different types of folds and random search optimization for different measures. The ensemble algorithm optimizes macroFscore by predicting the documents for each label, instead of the usual prediction of labels per document. Scores for documents are predicted by weighted voting of base-classifier outputs with a variant of Feature-Weighted Linear Stacking. The number of documents per label is chosen using label priors and thresholding of vote scores.</p> <p>The full description .pdf file is attached, and the code can be downloaded from:&nbsp;https://storage.googleapis.com/kaggle-competitions/kaggle/3634/media/LSHTC4_winner_solution.zip</p> <p>The above code package includes precomputed result files for the base-classifiers used by our ensemble. These take close to 300MB. A package omitting the base-classifier output files is also available: https://storage.googleapis.com/kaggle-competitions/kaggle/3634/media/LSHTC4_winner_solution_omit_resultsfiles.zip</p> <p>Feel free to ask any questions about our solution.</p> <p>Cheers,</p> <p>-Antti</p> <p>[quote=Junhui;54134]</p> <p>Didn't find params file &nbsp;&quot;templates/mnb_c_jm.template&quot; &nbsp;in the zip files.&nbsp;</p> <p>[/quote]</p> <p>Thanks for notifying. Attached is the mnb_c_jm.template file.</p> <p>[quote=Junhui;54403]</p> <p>thank you very much for your replying. I found some problems.</p> <p>MAKE_FILES : generate files in&nbsp;wikip_large_[0-9] folds, but RUN_DEVS read files from&nbsp;multi_label/wikip_large_[0-9] folds(there is no multi_label fold and no any script to copy folds into multi_label), which would need us to modify the scripts to run correctly.</p> <p>And btw there is no 'label_dev_cutoffs.txt' file.</p> <p>[/quote]</p> <p>Just remove &quot;multi_label/&quot; from the path names. The files were originally in three different directories, one for segmenting data, one for running base-classifiers and one for the ensemble combination. These were merged for the system description, so some&nbsp;path names can be slightly off.</p> <p>The label_dev_cutoffs.txt was for a new type of model left out from the final combination. These gave very large improvements on local tests, but failed on the leaderboard data. There wasn't enough time to find out why these didn't work.</p>
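<p>For readers who want a concrete picture of the base-classifiers: below is a toy, two-level sketch of a smoothed Multinomial Naive Bayes of the kind described above, interpolating each label's multinomial with the collection-level distribution. The actual solution also mixes in document- and hierarchy-level models, uses TF-IDF/BM25 weighting, and tunes the interpolation weights by random search; the smoothing weight and the dense per-label loop here are purely illustrative.</p>
<pre><code>import numpy as np

def fit_smoothed_mnb(X, y, n_labels, lam=0.7):
    # X: scipy csr_matrix of (weighted) term counts, y: integer label ids, lam: interpolation weight
    collection = np.asarray(X.sum(axis=0)).ravel()
    collection = collection / collection.sum()

    log_probs = np.zeros((n_labels, X.shape[1]))
    for c in range(n_labels):
        counts = np.asarray(X[y == c].sum(axis=0)).ravel()
        label_dist = counts / max(counts.sum(), 1.0)
        # Interpolate the label-level multinomial with the collection-level one.
        log_probs[c] = np.log(lam * label_dist + (1.0 - lam) * collection + 1e-12)
    return log_probs  # score a document d (term-count vector) as log_probs @ d plus a label prior
</code></pre>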
Learning Equality - Curriculum Recommendations
10th place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Correlated topics paths are:<br> Topic path: the reverse topic tree (like the example in the post)<br> Correlated topics paths are the topics which the content was associated with in the correlations.csv files. I will add an example to the note to illustrate it</p> <p>I want to thank the organizer for a fun competition for an important mission. I learned a lot through participating in it. Thanks also for everyone who is sharing the solutions, I've been learning a lot from the many approaches being presented.</p> <p>My solution had 3 stages which followed the most popular paradigm in this competition- retrieval, reranker (cross-encoder) and a stage to calibrate the results on the validation set for threshold selection.</p> <p>A high level diagram of the pipeline:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F22991%2F2d73046973691aa2e02c3578f984eed2%2FScreenshot%202023-03-16%20at%2012.19.14%20AM.png?generation=1678951197748902&amp;alt=media" alt=""></p> <p>My overall strategy was to keep stage1 inference simple and fast and do the heavy lifting in stage2 with the heavier transformer models. </p> <h2>Stage 1: Bi-Encoder</h2> <p>The baseline model was sentence-transformers/paraphrase-multilingual-mpnet-base-v2 and iteratively fine tune it with better training sets. </p> <p><strong>NN</strong><br> The output of stage1 is a set of content candidates assigned to each topic. Candidate selection was based on similarity score threshold (not top-K neighbors). This was helpful in several ways:</p> <ol> <li>It helped avoid unnecessary processing for many irrelevant candidates</li> <li>Initial experiments showed it gave better stage 1 CV</li> <li>The score from stage 1 helped define “sampling regions” that I used in stage2 and 3. More on this later.</li> </ol> <p><strong>Stage 1 features:</strong><br> For each topic, I built a reverse topic tree from the training set. I’ll call it [topic path] in the rest of the note:<br> (example)<br> /12. 20: Bird Reproduction/12: Vertebrates/Book: Introductory Biology/Introductory and General Biology/Bookshelves/Libretext Open Educational Resource Library</p> <p>I used the following columns concatenated to strings as the input for the models:<br> Content: [Kind]&lt; K &gt;[title]&lt; D &gt;[description]&lt; T &gt;[text]<br> Topic: [topic path]&lt; O &gt;[description]</p> <p><strong>Training</strong><br> The key for success in stage 1 was finding an optimal training set. The challenge has been how to select the negative samples for training as naive sampling leads to weak models. </p> <p>In order to create an optimal training set I created the following process:</p> <ol> <li>Predict topic-content assignments on the training set based on the latest trained model</li> <li>Use the topic-content assignments to generate a new training set. This would create a new training set with hard false positives on the top scores and easy false positives. The threshold in which I would cut off candidate would be when reaching an average of ~50 candidates per topic. 
</li> <li>Add all the missing topic-content assignments from the training set</li> <li>Train a new model with the new training set.</li> <li>Go to step 1</li> </ol> <p><strong>Multiple Negative Ranking Loss</strong><br> One of the key decision that helped my CV/LB early on was to use Multiple Negative Ranking Loss, which helped accelerate training significantly. From the training process described above, the false positive candidates work great with MNRL because in one batch we can have a good blend of hard to easy samples across multiple topics. There is one problem though- one of the requirements of MNRL is that for each sample pair (a_i, p_i): all p_j (j!=i) and all n_j are considered negative, which is likely to break especially in case of hard negative samples. In order to make it work I created a carefully crafted training set preparation process that “pre arranged” the batches before training started.</p> <p>For each batch:</p> <ol> <li>pick a positive topic-content assignment</li> <li>For each false positive content assignment for the above topic, find a correct topic to pair to that content and add it to the training set. Remember previously encountered topics and skip topic-content pairs where the topic has already been seen.</li> <li>Repeat the above process, in case there are no more valid samples to choose, pick another random topic-content pair.</li> <li>In case of a dead end- pick a sample from another language. And start over. Conveniently topic-content assignment are always valid within the same language so there are no collisions across languages.</li> </ol> <p>I passed the training data as-is to the data loader without shuffling where the batch size is the same one I used in the pre-processing step.</p> <p><strong>Contrastive Loss</strong><br> After training enough iterations of the above process (eventually F2/Precision/Recall stops improving), I took the best model and further fine tuned it with contrastive loss on the latest training set using a standard training process with the training data as is.</p> <p>For stage1 the best F2 score in my CV is 0.65. I didn’t submit a solution based on stage1 when getting to these results so I don’t have the corresponding LB scores.<br> The average number of candidates per topic vs. recall at different threshold was<br> (threshold : avg. number of candidates : recall)<br> 0.91 : 9.7 : 0.77<br> 0.9 : 16.45 : 0.81 <br> 0.89 : 29.8 : 0.85 <br> 0.88 : 55.7 : 0.89<br> 0.87 : 103.4 : 0.91</p> <h2>Stage2: Cross-Encoder</h2> <p>I’ve been thinking about this step much like a “zero shot” learning approach because the topics in the test set were not seen in the training set. Furthermore, there is additional content that was not in the training set.</p> <p><strong>Feature engineering</strong><br> From each topic-conent pair from the previous stage (both for positive and negative labels) I generated two samples:</p> <p><strong>Training Sample 1:</strong><br> Same as the one for stage one but topic and content are concatenated and separated by the relevant [SEP] token</p> <p><strong>Training Sample 2:</strong><br> An addition that I had in this stage is to add the correlated topic paths of the content to the content features:</p> <p>content: [Kind]&lt; K &gt;[title]<strong>&lt; C &gt;[correlated topic path 1]&lt; C &gt;[correlated topic path 2] .. 
&lt; C &gt;[correlate topic path n]</strong>&lt; D &gt;[description]&lt; T &gt;[text]</p> <p>example:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F22991%2F9a20680335b1ba8fb0a898709e52340e%2FScreenshot%202023-03-16%20at%2010.27.09%20PM.png?generation=1679030880846395&amp;alt=media" alt=""></p> <p>The topic text is exactly the same as stage 1:<br> Topic: [topic path]&lt; O &gt;[description]<br> example:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F22991%2F55d96f0f428032ca81d45791dcb97baa%2FScreenshot%202023-03-16%20at%2010.27.18%20PM.png?generation=1679030911527793&amp;alt=media" alt=""></p> <p>The final input text is concatenation of both the topic and content separated by the [SEP] token relevant for the model.</p> <p>The reason for the two variants is that I wanted to make sure that the model have two learned capabilities:<br> Predict topic-content assignment based on text features alone - this would help with new unseen content in the test set.<br> Predict topic-content assignments based on relationship of the content to other topics. This helped resolve assignment ambiguities where the attributes of the content alone were not informative enough to decide whether a certain content is relevant to the topic or not (e.g. similar math concept but for different grades)</p> <p><strong>Sampling</strong><br> The cross-encoder was hard to train. The key for successful training was the sampling strategy. Similar to stage 1 it’s also important to find a good balance between hard and easy negatives so the model generalizes well with high predictive power.</p> <p>The best strategy that worked for me was - I took samples from stage one where the threshold was above 0.89. Between 0.88 and 0.89 I sampled 0.1 of the negative samples and below 0.88 I sampled 0.02 of the negative samples. I always added all the positive samples</p> <p><strong>Models:</strong><br> I trained 4 models:<br> XLM-Roberta-Base, seq max length 256<br> DeBERTa-v3-xsmall, seq max length 512<br> DeBERTa-v3-small, seq max length 512<br> DeBERTa-v3-base, seq max length 400</p> <p>It’s interesting to point out that although the Deberta models are English only models their tokenizers had the non-english character sequences and they were able to train well on the competition dataset. I didn’t use mdeberta because I was unsuccessful in making it work with fp16 training and training was too slow.</p> <p>One interesting observation is that more epochs consistently resulted in higher LB scores both in public and private LB. I suspect that I had room to further improve scores by spending more hours in training more epochs, but eventually I ran out of time. And it was also becoming expensive given that I was using Colab Pro+ credit for training.</p> <h2>Stage3: Logistic Regression</h2> <p>The last model is taking the scores from stage1, the models from stage2 and trained on the validation set to produce the final score for topic-content assignments. Due to training on the validation set and using the results to find the threshold for maximizing F2 I decided to use a linear model in order to have low model complexity and less likelihood of overfitting. 
</p> <p>The approach I took here for creating separate features for each stage 1 threshold and further split the features based on whether the content was in the training set or not.<br> It looks something like that.</p> <p>Here is a snapshot of the coefficients from the best model to illustrate the above</p> <p>Threshold above 0.95:<br> Content In Training Set:<br> stg_1_score, xlmr_score, deb_xs_score, deb_s_score, deb_base_score<br> 0.16605078, 0.67237598, 3.0603439 , 0.92530175, 2.55978453, </p> <p>Content Not In Training Set:<br> stg_1_score, xlmr_score, deb_xs_score, deb_s_score, deb_base_score<br> 3.01727545, 1.06343818, 0.9088478 , 0.33693999, 2.01195161,</p> <p>Threshold between 0.94 and 0.95:<br> Content In Training Set:<br> stg_1_score, xlmr_score, deb_xs_score, deb_s_score, deb_base_score<br> 0.25057614, 1.64266565, 1.98558391, 0.50577352, 1.99118334,</p> <p>Content Not In Training Set:<br> stg_1_score, xlmr_score, deb_xs_score, deb_s_score, deb_base_score<br> 2.60065444, 0.93282139, 0.68057603, 0.82555645, 1.23466772,</p> <p>For the submission I selected a different threshold that maximizes F2 on the validation set for content in the training set and out of the training set.</p> <p>My best submission had CV 0.747, Private LB 0.741, Public LB: 0.708</p> <p><strong>Validation Set:</strong><br> For my evaluation set I sampled 10% of the topics in correlations.csv randomly, and removed half of their correlated content from the training set as well. In hindsight removing some channels in addition would have been better to mimic the distribution of the data in the competitions test set more closely. Because the validation set was used to calibrate the final predictions and the F2 maximizing thresholds I believe that a better validation set would directly translate to better LB scores.</p> <p>In this competition I used 1 fold CV shared across all stages. I was initially concerned about the high training cost for more than 1 fold, but over time I got more comfortable with the 1 fold CV due to a consistently good correlation between the CV and the LB results. I did have quite a bit of shake-up anxiety toward the end though so I’m happy it wasn’t a bad shakeup case.</p> <p>For training I used Sentence-Transformers for stage1, Hugging Face Transformers for stage2 and scikit-learn for stage3.</p> <p>Thanks and looking forward to more competitions!</p> <p>For new content I didn't add any correlated topic paths, I only added the correlated topics for the content that was already existing in the correlations.csv file that came with the training set. That is the reason that I created two training samples for each content when training the stage2 model: one with correlations and the other one without. That way the model learned to predict from both scenarios</p>
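<p>A small sketch of the stage-1 candidate selection by similarity threshold (rather than top-k) described above: the model name is the baseline quoted in the post (before the iterative fine-tuning), and 0.89 is one of the thresholds from the table; in the real pipeline the encoder is the fine-tuned bi-encoder and similarities are computed within each language.</p>
<pre><code>import torch
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

def retrieve(topic_texts, content_texts, threshold=0.89):
    # Encode both sides and keep every content whose cosine similarity clears the threshold.
    t_emb = model.encode(topic_texts, convert_to_tensor=True, normalize_embeddings=True)
    c_emb = model.encode(content_texts, convert_to_tensor=True, normalize_embeddings=True)
    sims = util.cos_sim(t_emb, c_emb)                  # shape: [n_topics, n_contents]
    keep = sims >= threshold
    return [torch.nonzero(keep[i]).flatten().tolist() for i in range(len(topic_texts))]
</code></pre>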
Learning Equality - Curriculum Recommendations
12th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/ohanegby" target="_blank">@ohanegby</a> Thank you very much. And congratulations on your solo gold medal and promotion to master as well! I've been aiming for it since being surpassed halfway through, but couldn't catch up! You are strong. Let's meet at a competition somewhere.</p> <p><a href="https://www.kaggle.com/xbustc" target="_blank">@xbustc</a> Thank you, my friend! I was watching you improve your score quite a bit towards the end! </p> <p><a href="https://www.kaggle.com/conjuring92" target="_blank">@conjuring92</a> Thank you very much. Congrats too. Although I was surpassed in the end (maybe final day), I couldn't have achieved it without your input! Let's meet again at a competition somewhere!</p> <p>First of all, I would like to thank kaggle and the staff for hosting such an interesting competition.<br> I'm really happy that I achieved my goal of reaching the solo gold tier.</p> <h1>1. Summary</h1> <p>My solution consists of 3 stages. In the 1st stage, I created embeddings using Arcface and generated candidates. In the 2nd stage, I used a transformer-based rerank model with the distances from the first stage as features. In the 3rd stage, I used the 64-dimensional embeddings from the first stage obtained via SVD, distances, and predictions from the second stage to create rerank models using 1DCNN, LGBM, and MLP. I then performed rank ensemble, set a threshold, and selected final content ids. Finally, I added post-processing to fill in the gaps based on distances for cases where there were no content IDs.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4001300%2F520a3f29b326d89d04ca5d9010586fdc%2Fall2.jpg?generation=1680345980966777&amp;alt=media" alt=""></p> <h1>2. 1st stage Arcface</h1> <h2>2.0 Cross validation</h2> <p>I used stratified group k-fold to group the data up to grandparents into one group.</p> <h2>2.1 Feature engineering</h2> <p>My text creation was greatly boosted by <a href="https://www.kaggle.com/conjuring92" target="_blank">@conjuring92</a> post <a href="https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/376873" target="_blank">REF</a></p> <p>topics: Level + [sep] + title : description + [sep] + context + [sep] + children + [sep] + parent_description + [sep] + brother title<br> content : kind + [sep] + language + [sep] + title + [sep] + description + [sep] + text</p> <h2>2.2 MLM</h2> <p>I conducted MLM using the features described in section 2.1, and referred to this code <a href="https://www.kaggle.com/code/takamichitoda/disaster-tweets-mlm" target="_blank">REF</a> for the implementation. Thank you.<br> MLM boosted my cv + 0.010 @ 1fold. 
However, I would have needed to verify that the model performed well on all folds (I did not run this check).</p> <h2>2.3 Arcface architecture</h2> <p>I believe that using Arcface was the most distinctive feature of my solution.</p> <p>The following is the dataset layout for training.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4001300%2F9bf76d463dd7470f29dff26fb129e8db%2Fdataset.jpg?generation=1678856223409857&amp;alt=media" alt=""></p> <p>The following is an overview of one training iteration.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4001300%2F53e52815e275533a0dc58ab3d3619fa6%2Farcface.jpg?generation=1678856242365325&amp;alt=media" alt=""></p> <p>First, I set the topic id and content id as outputs and the topic id as input, then calculated the loss for each output (loss1, loss2).<br> Next, I set the content id as input and calculated the loss for each output (loss3, loss4).<br> Finally, I took the average of the four losses, performed backpropagation, and trained the model.</p> <p>I trained for approximately 30 epochs, with a margin of 0.0001 and the value of s adjusted depending on the model (around 10-15). (A compact PyTorch sketch of such an ArcFace head is included at the end of this post.)</p> <h2>2.4 Ensemble</h2> <p>I created nine models including xlm-roberta-large, xlm-roberta-base, and mdeberta-v3-base, and then concatenated the outputs of these models to perform an ensemble.<br> For one of them, I used pseudo-labeling on the items with 'has content' equal to False. While individually weak, these models proved effective when combined in the ensemble.</p> <p>In order to avoid out-of-memory errors, I had to divide the topics and contents… (debugging this was very difficult).</p> <h2>2.5 Using fulltrain</h2> <p>Using full-train models was also a distinctive point of my solution. Initially, I used 4-fold models, but I realized that full-train was extremely powerful. In the end, I did not use the 4-fold models to calculate distances and instead used full-train models trained with 3 different seeds (public LB + 0.007).</p> <h2>2.6 (just reference 1st stage + rule base submit result)</h2> <p>cv : 0.65289, public lb : 0.695, private lb : 0.732</p> <h1>3. 2nd stage transformer base rerank</h1> <p>I found comments about overfitting in the discussion, but I did not experience it. I used folds consistently with the 1st stage. Moreover, it was mentioned in some discussions that reranking using transformers works up to a certain point, but beyond that, it no longer works. I experienced this as well. However, by adding the distance from the 1st stage as an input feature, I was able to obtain further CV gains.</p> <h2>3.1 Feature engineering</h2> <p>I set the input as follows:<br> Similarity : str (int((1-distance)*1000)) + [sep] + topics title : topics description + [special original defined sep] + topics context + [sep] + content title : content description : content text</p> <h2>3.2 Ensemble</h2> <p>model1 : xlm-roberta-large cv : 0.66418<br> model2 : sentence-transformers/paraphrase-xlm-r-multilingual-v1 cv : 0.66039</p> <p>model1 * 0.7 + model2 * 0.3 = cv 0.6668, public lb : 0.70118, private lb : 0.73884</p> <h1>4. 3rd stage LGBM, 1dcnn, MLP</h1> <p>In the 3rd stage, I used the 64-dimensional SVD embeddings generated in the 1st stage, as well as language, distance, and predictions generated in the 2nd stage, as features. The results for each are as follows.</p> <p>LGBM : cv 0.6644<br> 1dcnn : cv 0.66450<br> mlp : cv 0.663156</p> <h1>5. 
Rank ensemble for final submission and postprocess</h1> <p>I performed a mean ensemble of the results obtained in sections 3.2 and 4, ranked by their respective scores.<br> In the post-processing, for the topics in which there were no results above the threshold, I established a specific number per language and used a filling technique based on the distance in the 1st stage.</p> <p>final cv : 0.668755, public lb : 0.7023, private lb : 0.74044 (12 th)</p> <h1>6. Not working for me</h1> <ul> <li>Changes in margin and s for each epoch at Arcface</li> <li>AWP</li> <li>augmentation by mixup</li> <li>Catboost, XGboost, Tabnet</li> <li>Knowledge Distillation</li> </ul> <h1>7. Acknowledgments</h1> <p>I couldn't get this score on our own. I am grateful to those who shared their knowledge in the past, those who teamed up with me, and everyone else! I respect to you.</p> <p>Special thanks to this competition (using the code, dataset, and strategy)<br> <a href="https://www.kaggle.com/conjuring92" target="_blank">@conjuring92</a>, <a href="https://www.kaggle.com/takamichitoda" target="_blank">@takamichitoda</a>, <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a>, <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> (fb3 notebook) </p> <p><a href="https://www.kaggle.com/toyou2u" target="_blank">@toyou2u</a> Thank you very much. And congratulations on your promotion to master. If my calculations are correct, it is 0.90792 in all validation data (fold0 : 0.922, fold1 : 0.903, fold2 : 0.916, fold3 : 0.889). My validation data does not include the source.</p> <p><a href="https://www.kaggle.com/trushk" target="_blank">@trushk</a> Thank you very much! And congratulations on winning the gold medal. Your team showed usual strength, especially final day! You're right, it was hard work! I'll take a break and then participate in another competition!!</p> <p><a href="https://www.kaggle.com/gauravbrills" target="_blank">@gauravbrills</a> Thank you very much! This is a good question. It's just experimental results. In my case, not including language in the topics slightly improved the score for fold 1.</p> <p><a href="https://www.kaggle.com/siddharthkumarsah" target="_blank">@siddharthkumarsah</a> Thank you very much! Let's meet again at some other competition.</p> <p><a href="https://www.kaggle.com/syhens" target="_blank">@syhens</a> Thank you. And congratulations to you too!</p> <p><a href="https://www.kaggle.com/takamichitoda" target="_blank">@takamichitoda</a> Thank you very much. I learned a lot from your blog and notebook. And it always helps. Thank you for the easy-to-understand explanation!</p>
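<p>A compact PyTorch sketch of an ArcFace head of the kind used in stage 1 (referenced in section 2.3). The scale s=12 is picked from the 10-15 range quoted above and m=0.0001 is the margin from the post; everything else (initialization, loss reduction) is illustrative.</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    def __init__(self, emb_dim, n_classes, s=12.0, m=1e-4):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(n_classes, emb_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m = s, m

    def forward(self, emb, labels):
        # Cosine similarity between L2-normalized embeddings and class centers.
        cos = F.linear(F.normalize(emb), F.normalize(self.weight)).clamp(-1 + 1e-7, 1 - 1e-7)
        theta = torch.acos(cos)
        target_cos = torch.cos(theta + self.m)           # add the angular margin to the true class
        onehot = F.one_hot(labels, num_classes=self.weight.size(0)).float()
        logits = self.s * (onehot * target_cos + (1.0 - onehot) * cos)
        return F.cross_entropy(logits, labels)
</code></pre>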
Learning Equality - Curriculum Recommendations
18th place Solution -Thank You !
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks to the organizers for setting up this competition; I believe we will not find many competitions of this kind. It was amazing to see the 1st-rank solution based on mostly traditional machine learning, with quite a number of things to learn from it.<br> I thank all my team members for working hard on this competition, <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> <a href="https://www.kaggle.com/rohitsingh9990" target="_blank">@rohitsingh9990</a>, and especially <a href="https://www.kaggle.com/evgeniimaslov2" target="_blank">@evgeniimaslov2</a>, whose efforts in the last 2 weeks put us in the top 20. There are many more new things to take away from the other top solutions.<br> Below is the outline of our solution.<br> We used a 2-stage solution approach with channel-level 5-fold StratifiedKFold (aligned quite well with the LB).<br> <strong>Stage1</strong><br> 1) 64-seq-length paraphrase-mpnet-v2 trained on a similarity loss (with different sequence lengths and positive-only / positive+negative combinations)<br> 2) 128-seq-length RoBERTa-large trained on ArcFace loss (positive-only samples) <br> The max positive score of all our models on the whole dataset was 0.95+ (top 50), while on the validation fold it was 0.85+.<br> <strong>Stage2</strong><br> Ensemble of <br> 1) 64-seq-length 5-fold paraphrase-mpnet-v2, pretrained on stage1, with top-50 neighbors, trained on a contrastive loss (this gave a significant boost compared to the public reranker-based approach, taking the score from 0.57 straight to 0.657)</p> <p>2) 64-seq-length 5-fold RoBERTa-large as a classifier trained on the same contrastive loss, using the top-64 neighbors from the corresponding stage1 RoBERTa model </p> <p>3) 256-352 seq-length 5-fold reranker trained using the public kernel approach, reducing some of the false-positive count of model 1 (this boosted the score by 0.01, from 0.657 to the 0.66-0.67 range)</p> <p>4) Finally, LightGBM, which took the score to the 0.68 range (a final boost)</p> <p><strong>Micro-level approaches</strong><br> 1) Quite a number of top-N selections were made for stage2 based on similarity ranking, giving CV boosts for stage2; <a href="https://www.kaggle.com/evgeniimaslov2" target="_blank">@evgeniimaslov2</a> can throw some more light on this.<br> 2) LightGBM based on probability features (ranking-based) and other train features like kind, category, etc. after the stage2 classification. This gave a boost of around 0.02 to land us at our current score. Using LightGBM our CV reached 0.74, giving a private LB of 0.721, which is quite well aligned.</p> <p>All in all, it was a progressive climb using the above approaches over the entire period of the competition.</p> <p>Regards<br> Jaideep</p>
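<p>As an illustration of the final LightGBM stage described above (ranking-based probability features plus train metadata such as kind and category), here is a minimal sketch. The dataframe name and column names are assumptions for illustration, not the team's actual feature set.</p> <pre><code>import lightgbm as lgb

# cand: hypothetical DataFrame with one row per (topic, content) candidate and stage-2 probabilities
cand["prob_rank"] = cand.groupby("topic_id")["stage2_prob"].rank(ascending=False)
for col in ["content_kind", "topic_category"]:
    cand[col] = cand[col].astype("category")          # LightGBM handles pandas categoricals

features = ["stage2_prob", "prob_rank", "stage1_similarity", "content_kind", "topic_category"]
train, valid = cand[cand.fold != 0], cand[cand.fold == 0]

params = {"objective": "binary", "learning_rate": 0.05,
          "num_leaves": 63, "scale_pos_weight": 5.0}   # up-weight the rare positive class
booster = lgb.train(
    params,
    lgb.Dataset(train[features], label=train["is_match"]),
    valid_sets=[lgb.Dataset(valid[features], label=valid["is_match"])],
    num_boost_round=2000,
    callbacks=[lgb.early_stopping(100)],
)
cand["final_prob"] = booster.predict(cand[features])
</code></pre>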
Learning Equality - Curriculum Recommendations
1st Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a> Congrats to you too!</p> <p>I set my pipeline when my retriever model was weak. At that time it had improved. And initially my main focus was efficiency track and filtering by language was making my pipeline faster. I didn't try removing this filter at the end, maybe I could get some improvement. </p> <p>Thanks. I believe you will get a medal next time. You may still use Epicurus for teaming up with people:)</p> <p>Thank you <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> I couldn't stay away when the problem was very similar to the app I am building:)</p> <p>Thanks <a href="https://www.kaggle.com/chumajin" target="_blank">@chumajin</a> and congrats with your solo gold!</p> <p>Thanks <a href="https://www.kaggle.com/ohanegby" target="_blank">@ohanegby</a>. Congrats with your solo gold and upcoming title! </p> <p>Thank you <a href="https://www.kaggle.com/churkinnikita" target="_blank">@churkinnikita</a> and congrats to you too!</p> <p>Thanks <a href="https://www.kaggle.com/conjuring92" target="_blank">@conjuring92</a> Congrats to you too! Good that I distracted you with my team name:) Unfortunately I didn't try any other losses since I expected ArcFace to work the best because it compares with all contents at the same time.</p> <p>Thank you <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> !</p> <p>Thank you <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a> Congrats to you guys with your very successful sprint. Your momentum scared me:)</p> <p>Congrats to you too! I guess there were some key features to include.</p> <p>Thank you <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a> ! Meeting you in real life helped me to stay motivated on Kaggle:)</p> <p>Thank you <a href="https://www.kaggle.com/shujun717" target="_blank">@shujun717</a> I just checked your profile, you seem to continue rocking when I was away. Congrats with your recent results!</p> <p>Retriver CV was the one fold split I explained above. Stage 2 LGBM CV was a groupkfold on retriever's validation set.</p> <p>I calculated similarity scores only between same language items.</p> <p>Yes. 4 fold split on 1/3 of non-source topics (like 1/6 of whole data). Diversifying my second LGB fold split seed improved like 0.001, so I guess more data would also improve a bit. I didnt have enough resources to train retriever models for all folds, therefore I couldnt do this. But since one feature is very dominant, I wouldnt expect a very big improvement.</p> <p>I had given a long break on Kaggle and came back with this competition because I thought it was very relevant to the app I am building, namely Epicurus. This break made me miss Kaggle and kept me motivated in this competition</p> <p>My actual solution and efficiency solution are very similar. So I will describe both of them at the same time. 
I hope I can refactor my code and share on Github soon.</p> <p><strong>Pipeline:</strong><br> Candidate Selection (Retriever methods) -&gt; Feature Engineering -&gt; Lightgbm -&gt; Postprocessing</p> <p><strong>Validation Scheme:</strong></p> <ul> <li>1 fold validation</li> <li>All source topics and random 67% of the other topics are selected for the training set. The rest are validation topics.</li> <li>The contents which only match with validation topics are excluded from the training set.<br> For evaluation, validation topics are matched with all the contents and competition metric is calculated.</li> <li>While training lightgbm model on the candidates, group4fold on topic_id is used on the validation set. Evaluation is done on the whole validation set afterwards.</li> </ul> <p>At the end of the competition, I had 0.764 validation score and 0.727 LB. While it is a big gap, improvements in my validation score were almost always correlated with LB. And I got my validation score as my Private LB score, which I didnt expect.</p> <p>Edit: Efficiency model got 0.718 validation, 0.688 Public LB, 0.740 Private LB and around 20 minutes CPU run-time.</p> <p><strong>Topic/Content Representation</strong></p> <p>Each topic is represented as a text using its title, its description and its ancestor titles up to 3 parents above in the tree. Example: </p> <p>'Triangles and polygons @ Space, shape and measurement @ Form 1 @ Malawi Mathematics Syllabus | Learning outcomes: students must be able to solve problems involving angles, triangles and polygons including: types of triangles, calculate the interior and exterior angles of a triangle, different types of polygons, interior angles and sides of a convex polygon, the size and exterior angle of any convex polygon.'</p> <p>Each content is represented as a text using its title, its kind and its description (its text if it doesn’t have a description). Example:</p> <p>'Compare multi-digit numbers | exercise | Use your place value skills to practice comparing whole numbers.'</p> <p><strong>Candidate Selection</strong></p> <p><strong>TFIDF</strong><br> Char 4gram TFIDF sparse vectors are created for each language and matched with sparse_dot_topn, which is a package I co-authered (<a href="https://github.com/ing-bank/sparse_dot_topn" target="_blank">https://github.com/ing-bank/sparse_dot_topn</a>) It works very fast and memory efficient. For each topic, top 20 matches above 1% cosine similarity are retrieved.</p> <p><strong>Transformer Models</strong><br> I used paraphrase-multilingual-MiniLM-L12-v2 for efficiency track and ensemble of bert-base-multilingual-uncased, paraphrase-multilingual-mpnet-base-v2 (it is actually a xlm-roberta-base) and xlm-roberta-large for the actual competition. </p> <ul> <li>Sequence length: 64. But only the first half of the output is mean pooled for the representation vector. Last half is only fed for context. This worked the best for me.</li> <li>Arcface training: Training contents are used as classes. Therefore topics have multiple classes and l1-normalized target vectors. The margin starts with 0.1 and increases linearly to 0.5 at the end of 22 epochs. First 2 and last 2 epochs have significantly lower LR. 
Arcface class centers are initialized with content vectors extracted from pretrained models.</li> <li>Ensemble method: Concatenation after l2 normalization</li> </ul> <p>Edit: Models are re-trained with whole data for submission at the end.</p> <p>Top 20 matches within the same language contents are retrieved.</p> <p>In addition, for each topic, its closest train set topic is found and its content matches are retrieved as second degree matches.</p> <p><strong>Matches from Same Title Topics</strong></p> <p>For each topic, train set topics with the same title are found and their matched contents are retrieved.</p> <p><strong>Matches from Same Representation Text Topics</strong></p> <p>For each topic, train set topics with the same representation text are found and their matched contents are retrieved.</p> <p><strong>Matches from Same Parent Topics</strong></p> <p>For each topic, train set topics with the same parent are found and their matched contents are retrieved.</p> <p>All retrieved topic-content pairs are outer joined.</p> <p><strong>Feature Engineering</strong></p> <ul> <li>tfidf match score</li> <li>tfidf match score max by topic id</li> <li>tfidf match score min by topic id</li> <li>vector cosine distance</li> <li>vector cosine distance max by topic id</li> <li>vector cosine distance min by topic id</li> <li>topic title length</li> <li>topic description length</li> <li>content title length</li> <li>content description length</li> <li>content text length</li> <li>content same title match count</li> <li>content same title match count mean over topic id</li> <li>content same representation text match count</li> <li>content same representation text match count mean over topic id</li> <li>content same parent match count</li> <li>content same parent match count mean over topic id</li> <li>topic language</li> <li>topic category</li> <li>topic level</li> <li>content kind</li> <li>same chapter (number extracted from the text)</li> <li>starts same</li> <li>is content train</li> <li>content max train score</li> <li>topic max train score</li> <li>is content second degree match</li> </ul> <p><strong>Lightgbm Model</strong></p> <ul> <li>Hit or miss classification problem</li> <li>Overweight hit (minority) class</li> <li>Monotonic constraint and 2x feature contribution on most important feature: vector cosine distance</li> <li>2 diverse lightgbms: Excluded features which will potentially have different distribution on real test set in one of the models, vector cosine distance min by topic id. Also used slightly different parameters and kfold seed.</li> </ul> <p><strong>Postprocess</strong></p> <p>Postprocessing was very important. Using relative probabilities (gaps with highest matches) and using different conditions for train and test set contents were the key. 
While matching train set contents was like a classification problem, matching test set contents was like an assignment problem.</p> <p>Topic-content pairs are included if they have one of the conditions below:</p> <ul> <li>Content has the best matching probability among other contents for the given topic.</li> <li>Content is among the train set contents and has above 5% probability and has less than 25% gap with the highest matching probability in the given topic.</li> <li>Content is among the test set contents and has less than 5% gap with the highest matching probability in the given topic.</li> <li>Content is among the test set contents and the topic is its best match and its total gap* is less than 55%.</li> </ul> <p><strong>Code</strong></p> <p>All training notebooks: <a href="https://github.com/aerdem4/curriculum-recommendations" target="_blank">https://github.com/aerdem4/curriculum-recommendations</a><br> My actual inference notebook (v15 selected): <a href="https://www.kaggle.com/code/aerdem4/lecr-ensemble-v03" target="_blank">https://www.kaggle.com/code/aerdem4/lecr-ensemble-v03</a><br> My best Efficiency notebook: <a href="https://www.kaggle.com/code/aerdem4/lecr-efficiency-minilm" target="_blank">https://www.kaggle.com/code/aerdem4/lecr-efficiency-minilm</a><br> An alternative Efficiency solution: <a href="https://www.kaggle.com/code/aerdem4/lecr-efficiency-nobert" target="_blank">https://www.kaggle.com/code/aerdem4/lecr-efficiency-nobert</a></p> <p>Thank you <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> I will Kaggle time to time, especially in winters:)</p> <p>Thanks. It was keeping 2GB max. Did you keep them in float32 or 64?</p> <p>Thank you very much for your support:) <a href="https://www.kaggle.com/fatihozturk" target="_blank">@fatihozturk</a> </p> <p>Thanks. I need to double check the recall on top20.</p>
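<p>Since the gap-based postprocessing is described above as a key step, here is a small pandas sketch of how those acceptance rules could be applied to the LightGBM outputs. Column names are assumptions, and the fourth rule (the "total gap" condition) is left out because its exact definition is not given in the write-up.</p> <pre><code>import pandas as pd

def postprocess(cand: pd.DataFrame, train_content_ids: set) -> pd.DataFrame:
    """cand: one row per candidate pair with columns topic_id, content_id, prob (assumed)."""
    df = cand.copy()
    df["topic_best"] = df.groupby("topic_id")["prob"].transform("max")
    df["gap"] = df["topic_best"] - df["prob"]                 # gap to the topic's best match
    df["is_train_content"] = df["content_id"].isin(train_content_ids)

    keep = (
        (df["prob"] >= df["topic_best"])                                        # best match for the topic
        | (df["is_train_content"] & (df["prob"] > 0.05) & (df["gap"] < 0.25))   # seen content, small gap
        | (~df["is_train_content"] & (df["gap"] < 0.05))                        # new content, tiny gap
    )
    return df.loc[keep, ["topic_id", "content_id"]]
</code></pre>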
Learning Equality - Curriculum Recommendations
24th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you.</p> <p>No shakeup! Congrats to all the winners! I've learned a lot from this competition. Can't wait to see the winner solutions!!! For me, this is my first solo silver medal.</p> <p>I've published my inference code <a href="https://www.kaggle.com/code/xfffrank/lb-0-693-lecr-inference-two-stages-ensemble" target="_blank">here</a>.</p> <hr> <p>The image below shows a summary of my approach.</p> <p><strong>TLDR</strong>: An ensemble of 3 retrievers using max_length of 64 + top10 neighbors + 6 rerankers using mixed max_length. The reranker ensemble strategy is just a simple average.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1429419%2F29bf4a65cff09c08c709ece87eeb1c7d%2Fapproach%20summary.jpg?generation=1678838705022493&amp;alt=media" alt="Solution summary"></p> <ul> <li>Retriever candidates:</li> </ul> <table> <thead> <tr> <th>Model</th> <th>Batch size</th> <th>Validation recall@10</th> </tr> </thead> <tbody> <tr> <td>sentence-transformers/paraphrase-multilingual-mpnet-base-v2</td> <td>640</td> <td>0.79869</td> </tr> <tr> <td>sentence-transformers/all-distilroberta-v1</td> <td>1280</td> <td>0.78176</td> </tr> <tr> <td>xlm-roberta-base</td> <td>512</td> <td>0.77429</td> </tr> </tbody> </table> <ul> <li>Reranker candidates:</li> </ul> <table> <thead> <tr> <th>Model</th> <th>max length</th> <th>validation F2 score</th> </tr> </thead> <tbody> <tr> <td>MoritzLaurer/mDeBERTa-v3-base-mnli-xnli (with further pretraining)</td> <td>128</td> <td>0.6567</td> </tr> <tr> <td>sentence-transformers/paraphrase-multilingual-mpnet-base-v2 (with further pretraining)</td> <td>256</td> <td>0.6448</td> </tr> <tr> <td>sentence-transformers/paraphrase-multilingual-mpnet-base-v2 (with further pretraining)</td> <td>128</td> <td>0.6376</td> </tr> <tr> <td>MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7</td> <td>128</td> <td>0.6361</td> </tr> <tr> <td>xlm-roberta-large (with further pretraining)</td> <td>128</td> <td>0.6581</td> </tr> <tr> <td>timpal0l/mdeberta-v3-base-squad2</td> <td>256</td> <td>0.6269</td> </tr> <tr> <td>Ensemble</td> <td>/</td> <td>0.6882 (public LB: 0.660; private LB: 0.693)</td> </tr> </tbody> </table> <h3>CV setup</h3> <p>Basically I followed what was proposed <a href="https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/376873" target="_blank">here</a>. </p> <ol> <li>Split the topics which are not from "source" into 4 folds. The folds were stratified by "channels" because the hosts mentioned that the test topics could from seen channels.</li> <li>Put the topics from fold 0 to the validation set.</li> <li>Put all the topics from the "source" category, and the topics not from fold 0, to the training set.</li> <li>Use the correlation file to get all the matches of the training topics, and use them as positive pairs for sentence transformers.</li> </ol> <h3>Retriever training (Stage #1)</h3> <p>Use <strong>separate encoders</strong> for "topic" and "content" because intuitively their distributions are different. 
It turned out that it improved the validation recall by 2% or above compared with using one unified encoder. </p> <p>My best retriever is trained with "paraphrase-multilingual-mpnet-base-v2", it achieved a recall@10 of <strong>~0.80</strong>.</p> <p>One thing to note is that using longer sequence to train did not give me benefits, so I used 64 in all models. Meanwhile, a large batch size is very important in this stage. It is worth sacrificing the information in longer sequences.</p> <p>With contrastive loss, it was way much worse than <code>MultipleNegativesRankingLoss</code>. I assume I didn't use it in the right way.</p> <h3>Retrieve (Stage #2 prep)</h3> <p>This is the tricky part. It is difficult to retrieve more possible candidates and introduce less noise in the meantime. Naïvely retrieving more to increase recall will only lead to much longer training time and no better performer.</p> <p>I used 3 retrievers in my ensemble. I retrieved top10 candidates with each one and take the union set. Altogether I've got recall = <strong>0.853</strong> for the validation set.</p> <h3>Reranker training (Stage #2)</h3> <ul> <li><p>Here I used the same train/val splits for topics as the stage #1.</p></li> <li><p>I found that using a pretraining can boost the performance of reranker. Here the pretraining means the retriever training using a unified encoder. And before training the reranker, load the pretrained checkpoint. This way allows the model to gain some domain knowledge. But it could lead to data leakage to some extent. In the final ensemble, I mixed rerankers with and without the pretraining.</p></li> <li><p>In order to make the model converge faster and reduce training time, I use a customised balancing sampler, so that during each epoch the numbers of positive and negative pairs are the same.</p></li> </ul> <p><strong>Final thoughts</strong><br> It looks like my pipeline is close to the <a href="https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/394838" target="_blank">3rd solution</a> except that they used top100 neighbours and I only used top10 instead. I would say this is a rather simple pipeline. Their success indicates that big models and rich resources are able to create a simple yet very performant model !</p>
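<p>For the retrieval-union step above (top-10 candidates from each of the three retrievers, then the union), a minimal sketch could look like the following. It assumes L2-normalised topic and content embeddings have already been computed for each retriever; the names are placeholders.</p> <pre><code>import torch

def topk_union(topic_embs_per_model, content_embs_per_model, k=10):
    """Union of top-k content indices per topic across several retrievers.

    topic_embs_per_model[i], content_embs_per_model[i]: L2-normalised float tensors
    of shape (n_topics, dim) and (n_contents, dim) from retriever i.
    """
    n_topics = topic_embs_per_model[0].shape[0]
    union = [set() for _ in range(n_topics)]
    for t_emb, c_emb in zip(topic_embs_per_model, content_embs_per_model):
        sims = t_emb @ c_emb.T                    # cosine similarity (embeddings are normalised)
        topk = sims.topk(k, dim=1).indices        # (n_topics, k)
        for row, cand in enumerate(topk.tolist()):
            union[row].update(cand)
    return [sorted(c) for c in union]
</code></pre>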
Learning Equality - Curriculum Recommendations
28th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for this interesting competition! I joined one month before the competition ended and am happy that a simple and direct approach got me to 28th place.</p> <h1>Stage 1 retriever: Biencoder</h1> <p>Teach model to embed related topics and contents closely together, using the following common setup:</p> <ul> <li>Embed topic and content using same transformer backbone and mean pooling</li> <li>Compute cosine similarity between all examples in a batch</li> <li>Multiple in-batch negative ranking loss with temperature 0.05</li> </ul> <p>The following modifications gave a significant boost to the model:</p> <ul> <li>Modify sampler such that all examples in a batch share the same language. This makes the negative examples much more meaningful. For inference, I only consider candidate contents with matching language.</li> <li>Compute cross-entropy loss row-wise and column-wise (with respect to topics and contents)</li> <li>Penalize scores for the correct class with a margin</li> </ul> <p>The best retriever used XLM-RoBERTa (large) as backbone and was trained for 7 epochs. On my CV, it achieves a recall@50 of 91%.</p> <h1>Stage 2: Crossencoder</h1> <p>For every topic generate 50 candidates using the biencoder. The crossencoder feeds a joint representation of topic + content through a transformer and does binary prediction whether they match or not. The stage 2 model was trained using the out-of-fold predictions of the stage 1 model as input.</p> <p>The only things that worked for me in improving model performance were oversampling the positive class and using differential learning rates.</p> <p>My best stage 2 model was a multilingual BERT trained for 12 epochs achieving 63.6% on my CV (64.3% and 68.3% on public and private LB respectively).</p> <p>Looking at its performance and predictions, I realized that the crossencoder was having a hard time predicting a match from the text representations alone and experimented with GBDT as stage 2 model, but was not able to come up with a better-performing model.</p> <h1>Input representation</h1> <p>Could not find a input representation that worked better than simple concatenation.</p> <ul> <li>Topic representation: Title. Parent title. … Root title. Topic description.</li> <li>Content representation: Title. Description. Text.</li> </ul> <p>Sequence length was 128 tokens for both models.</p> <h1>Cross validation</h1> <p>I used 5 folds split on topics for stage 1, and 3 folds split on non-source topics for stage 2. I had good correlation between CV and LB using this setup. Splitting on channels instead led to unstable CV for me. 
Probing the public LB revealed that in fact there was a great amount (over 40%) of near-duplicate topics, so I settled in favor of a simple CV setup that gave me good correlation with LB.</p> <p>My evaluation metrics for stage 1 were average precision and recall@N, and directly the competition metric for stage 2.</p> <h1>Post processing</h1> <ul> <li>Topics with no predictions get assigned the nearest neighbor according to biencoder</li> <li>If a new topic shared the same title, parent title, and grandparent title as a seen topic, add the contents of the seen topic to the predictions (tiny boost).</li> </ul>
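<p>The biencoder loss described above (row-wise and column-wise cross-entropy over in-batch cosine similarities, with a margin penalty on the correct class) could be sketched as follows. This is my interpretation rather than the author's code; the margin value is a guess.</p> <pre><code>import torch
import torch.nn.functional as F

def symmetric_in_batch_loss(topic_emb, content_emb, temperature=0.05, margin=0.1):
    """Assumes row i of topic_emb matches row i of content_emb and the batch is one language."""
    t = F.normalize(topic_emb, dim=-1)
    c = F.normalize(content_emb, dim=-1)
    sims = t @ c.T                                              # (B, B) cosine similarities
    # penalise the positive logits so correct pairs must beat the negatives by a margin
    sims = sims - margin * torch.eye(sims.size(0), device=sims.device)
    logits = sims / temperature
    labels = torch.arange(sims.size(0), device=sims.device)
    loss_rows = F.cross_entropy(logits, labels)                 # topic to content direction
    loss_cols = F.cross_entropy(logits.T, labels)               # content to topic direction
    return 0.5 * (loss_rows + loss_cols)
</code></pre>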
Learning Equality - Curriculum Recommendations
2nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and the hosts of this competition, in particular to Jamie Alexandre for all of his helpful comments to all asked questions in the discussions and his great Data Exploration Notebook. Especially the code for traversing the topic tree was very helpful in the beginning.</p> <p>I spend quite some time during Christmas vacations on this competition and then did a longer break of over one month. So, in the end the finish was quite intense and submissions for Leaderboard and Efficiency Prize at the same time felled like participating in two different Competitions and quite to less Submissions for only one week left. </p> <h1>Solution: Single Stage Retrieval based on Cosine Similarity</h1> <p>My solution is based on a simple one stage approach to create <strong>embeddings</strong> for <strong>Topic</strong> and <strong>Content</strong> and calculation the cosine similarity for the retrieval task, without any further post-processing or second stage reranking. I will describe all the pieces of the puzzle to achieve high scores with such a relatively simple model.</p> <h2>Validation Split:</h2> <p><strong>10 Fold CV</strong> split based on minimizing the overlap of content relations between different folds. In this competition we have to split the topics into folds and all topics can have multiple attached content to it. Throughout this is a <strong>n x m</strong> relation of <strong>Topic x Content</strong>, it is difficult to create perfect splits that are aligned with the leaderboard.</p> <p>During CV-Split creation I tried to minimize the overlap of having the same content in different Folds, by creating 10 buckets and add topics to the bucket were all attached content of that topic creates the least overlap having the same attached content in different buckets. At least for fold 0 of my 10 Folds the alignment to the public leaderboard was quiet well. Whereas my Fold 2 is closer to the private leaderboard as it seems. I only used Fold 0, 1, and 2 for offline validation and results were always correlated to the leaderboard so I don’t care if they are perfectly aligned towards the score.</p> <p>Here is one example trained for only 32 Epochs with nearly perfect alignment of fold 0 towards public lb. <code>Selected</code> means here the average count of selected content per language over all topics in that fold and the <code>F2</code>, <code>Precision</code> and <code>Recall</code> of that single model are displayed for all languages. 
</p> <pre><code>-------------------------[Model: sentence-transformers/LaBSE]-------------------------- ---------------------------------------[Epoch: 32]------------------------------------- Epoch: 32, Train Loss = 1.064, Lr = 0.000050 ----------------------------------[margin: th 0.160]----------------------------------- Calculate Scores en Score: 0.65911 - Precision: 0.59954 - Recall: 0.739 (2806x65939) - selected: 7 es Score: 0.71289 - Precision: 0.61102 - Recall: 0.838 (1177x30844) - selected: 6 pt Score: 0.78237 - Precision: 0.69215 - Recall: 0.862 (343x10435) - selected: 7 ar Score: 0.51809 - Precision: 0.46233 - Recall: 0.663 (318x7418) - selected: 7 fr Score: 0.59613 - Precision: 0.59448 - Recall: 0.650 (304x10682) - selected: 9 bg Score: 0.68063 - Precision: 0.60765 - Recall: 0.751 (242x6050) - selected: 8 bn Score: 0.15228 - Precision: 0.09667 - Recall: 0.211 (237x2513) - selected: 9 sw Score: 0.69321 - Precision: 0.64327 - Recall: 0.763 (209x1447) - selected: 6 gu Score: 0.76149 - Precision: 0.66632 - Recall: 0.834 (181x3677) - selected: 6 hi Score: 0.63803 - Precision: 0.58325 - Recall: 0.744 (138x4042) - selected: 9 it Score: 0.87791 - Precision: 0.85495 - Recall: 0.906 (73x1300) - selected: 4 zh Score: 0.63350 - Precision: 0.54224 - Recall: 0.740 (68x3849) - selected: 10 mr Score: 0.69542 - Precision: 0.57128 - Recall: 0.898 (24x999) - selected: 12 fil Score: 0.72123 - Precision: 0.68860 - Recall: 0.778 (23x516) - selected: 7 as Score: 0.58904 - Precision: 0.53932 - Recall: 0.644 (13x641) - selected: 5 my Score: 0.71483 - Precision: 0.71825 - Recall: 0.842 (12x206) - selected: 4 km Score: 0.91160 - Precision: 0.88671 - Recall: 0.942 (11x505) - selected: 5 kn Score: 0.63651 - Precision: 0.55926 - Recall: 0.722 (9x501) - selected: 9 te Score: 0.86664 - Precision: 0.73492 - Recall: 0.968 (7x285) - selected: 13 or Score: 0.81583 - Precision: 0.69889 - Recall: 0.900 (5x326) - selected: 11 ta Score: 0.76419 - Precision: 0.52095 - Recall: 0.967 (5x216) - selected: 6 ur Score: 0.40010 - Precision: 0.31326 - Recall: 0.586 (5x245) - selected: 17 pnb Score: 0.87594 - Precision: 0.83333 - Recall: 0.938 (4x184) - selected: 8 ru Score: 0.66330 - Precision: 0.63704 - Recall: 0.725 (3x188) - selected: 12 pl Score: 0.82159 - Precision: 0.99061 - Recall: 0.792 (3x319) - selected: 30 swa Score: 0.08696 - Precision: 0.08696 - Recall: 0.087 (3x495) - selected: 23 tr Score: 0.54843 - Precision: 0.87778 - Recall: 0.518 (3x225) - selected: 9 --------------------------------------------------------------------------------------- CV Score: 0.65414 - Precision: 0.58710 - Recall: 0.743 --------------------------------------------------------------------------------------- Public Score: 0.65069 Privat Score: 0.69607 --------------------------------------------------------------------------------------- </code></pre> <h2>Input Data:</h2> <p>No use of special Token for separation instead, I use the <code>#</code> as seperator.</p> <h3><code>Topic:</code></h3> <p><strong>Title <code>#</code> Topic-Tree <code>#</code> Description</strong><br> The Topic Tree is reverse ordered and the same separator <code>#</code> is used so we end up with:<br> <strong>Title <code>#</code> Parent <code>#</code> Grandparent <code>#</code> … <code>#</code> Description</strong></p> <h3><code>Content:</code></h3> <p><strong>Title <code>#</code> Description <code>#</code> Text</strong> (cut to 32 based on white space splitting) <br> If for example <strong>Description</strong> is empty the model will see as input: <strong>Title 
<code>#</code> <code>#</code> Text</strong></p> <h2>Training:</h2> <p>I used only <strong>Transformer Base Models</strong> with a <strong>max. sequence length</strong> of <strong>96</strong> tokens for both topic and content. </p> <p>The first and maybe also the most important part of the puzzle is the use of the <strong>InfoNCE Loss</strong> as symmetric contrastive loss-function. Cause we have here a <strong>n x m</strong> matching problem it is tricky when using this loss function due to high intersection of topics and content. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2298475%2Fae42b8e146dbdf9a46ac01efd8662b29%2FLoss.jpg?generation=1678920767351499&amp;alt=media" alt=""></p> <p>As can be seen in the above visualization we want to have matches only on the diagonal of the similarity matrix during loss calculation, cause when using Cross-Entropy in both directions <strong>Topic-&gt;Content</strong> and <strong>Content-&gt;Topic</strong> the labels are just:</p> <p><strong>labels = torch.arange(len(logits))</strong></p> <p>But if we have two topics in the same batch that might share the same content we end up with high similarities besides the diagonal of the matrix whereas the label only says the high similarity on the diagonal are a correct match. On possibility to circumvent this problem is using label smoothing for the cross-entropy loss. But, having topics with related content in the same batch is simply noise for the model and should be avoided. </p> <p>The simplest way in Pytorch when using DP and not DDP is to write your own custom shuffle function and set shuffling in the data-loader to false, what means noting else than using a sequential sampler. </p> <p>My own custom shuffle function simply calculates before each epoch the composition of the batches and avoids sampling topics with related content in the same batch. Additionally after each epoch predictions for the whole training data are calculated to detect for each topic content that we miss and content that would be incorrectly assigned to that topic. </p> <ul> <li><p><strong>Missing Content</strong> is stored in a list and the specific pair (topic, content) gets oversampled during shuffling. </p></li> <li><p><strong>Wrong Content</strong> retrieved for a topic is more difficult to solve. Let’s say we have topic <code>t1</code> with the wrong content <code>c1</code> for that topic with a currently high similarity to that content. Now what we need is another pair <code>(t2, c1)</code> of our ground truth in the same batch, to push away <code>c1</code> from <code>t1</code> cause when using the InfoNCE loss all other <strong>N-1</strong> contents in the batch are negatives for that sample. This sampling strategy is highly effective cause we increase the margin to all negative picks during the next epoch. Or at least we try, because if adding <code>t2</code> would lead to a conflict based on related content (noise) it is rejected for adding in the same batch. I do this for a specific topic up to a max. number of 128 hard negatives for that sample, by a training batchsize &gt; 768.</p></li> </ul> <p>With that shuffle/sampling strategy we end up with batches without conflicts in related content or ambiguities, so we have only a correct match on the diagonal line and lots of near but incorrect content (hard samples) in the batch. </p> <h2>Language Switching:</h2> <p>I translated the most common languages into each other for additional training data. 
But just adding this data to our training would be a disaster when using InfoNCE, cause if we translate for example an topic and content item from English into French and a quite similar topic and content already exists in the original French content we will end up again with noise during our loss calculation. So what I did is a simple switching strategy after each epoch, shown in the next picture.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2298475%2F769a29fb044a278ed4a7e0f2233ab9b4%2FLanguage_Switching.jpg?generation=1678920800049688&amp;alt=media" alt=""></p> <p>So we can use the <code>correlation.csv</code> without having any trouble of ambiguities of translated to original content during loss calculation. Of course translation is never perfect and this alone creates some noise and further more we change the distribution of the training data. That’s the reason a switch is only used every second epoch and only between the languages en, es, pt and fr.</p> <p>Using language switching seems to bring a score boost of up to <strong>0.01 – 0.02</strong> what is not that much as expected maybe due to the noise this introduces during training. </p> <h2>Knowledge Distillation (Efficiency Prize):</h2> <p>Nothing special just used my pre-trained models as teacher, drop half of the transformer layers and train a second time using just the MSE-Loss. Weights of the student are initialized with the weights of the pre-trained teacher model. Distillation leads only to a slightly drop in performance when using 6 Layers, so the sweet spot for me was not dropping more layers, but of course this is always a trade of speed vs. accuracy.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2298475%2F64937f95b3f22b5b05a8077ffbcfe22c%2FDistillation.jpg?generation=1680006425442111&amp;alt=media" alt=""></p> <h2>Quantization + JIT-Trace (Efficiency Prize):</h2> <p>I used Torch Post Training Dynamic Quantization what is far from optimal. Unfortunately, there is no pre-installed huggingface optimum in the CPU kernel so using Intel Neural Compressor or OpenVINO needs offline installation, and if every second counts it makes not much sense to waste time with installation of additional packages. Maybe should have tested the ONNX runtime but run out of time. </p> <p>Unfortunately FX Graph Mode Quantization of Pytorch did not worked out for me with the huggingface models, otherwise I would have tested quantization aware training for better results. </p> <p>I ended up with just using <code>Eager Mode Quantization -&gt; Post Training Dynamic Quantization</code> and compiling into a jit-traced model. This leads to a performance drop of around <strong>0.01 – 0.02</strong> on my cv-scores but increases the throughput and lowers the execution time. </p> <p>If using <code>qint8</code> also on the Feed Forward part of the transformer on the intermediate up sample and output layer, the score drop is even higher so I ended up in only using <code>qint8</code> on the attention layer.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2298475%2Fb4dad57bf8ce6bc7f1649fde336b09d4%2FQuantization.jpg?generation=1678920872220419&amp;alt=media" alt=""></p> <h2>Optimal Threshold:</h2> <p>Last but not least but the last part of the puzzle is using an dynamic threshold instead of a static one. 
I can not say if this would work also for other models but I use this calculation after each epoch to find the hard negatives for the next epoch and also during inference and it gives me a boost of <strong>0.02</strong> in Score or even higher. The basic idea is described in the following picture. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2298475%2Fc28d94cc231c374cb8f198411fba47fa%2Franking.jpg?generation=1678920889907622&amp;alt=media" alt=""></p> <p>The biggest advantage was, that this is much more stable than finding an optimal static threshold. Every model I trained before leads to a different optimal static threshold. With using this dynamic calculation the results was always the same and values for margin were always between <code>[0.14 : 0.18]</code> on different folds. Using a higher margin leads to better recall by losing precision and because of using the F2 Score, perhaps I should have tried to submit higher values as well. During training I use <strong>0.16</strong> as margin, for my best submission <strong>0.18</strong> is slightly better, but the difference is not that high. </p> <h2>Results:</h2> <p>Cause the contrastive training seems to be rock stable and showing absolutely no signs toward overfitting I trained all those models for <strong>40 epochs</strong> and just used the final checkpoint. My experiments on fold 0-2 shows always, that no matter what checkpoint the difference in the last epochs to the best epoch is on the third digit, so it is save to train on the whole training data without any validation holdout. </p> <ul> <li>For Efficiency Prize I used an ensemble of only two models, cause a second model leads to the biggest jump in scores by only increasing the runtime to <strong>23 minutes</strong>.</li> <li>For the Leaderboard I am using an <strong>ensemble of 5 models</strong> with a runtime of only <strong>9 minutes</strong> on the <strong>P100 Instance</strong>, what is also quite fast. </li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2298475%2Fbe610c5b026a4f4106c248ef5e1d5f83%2Fresults.jpg?generation=1678920911677359&amp;alt=media" alt=""></p> <h2>What did not work out:</h2> <ul> <li>Using different margins for each language seems to lead to overfitting with worse results on the leaderboard.</li> <li>Cross-Encoder as second stage. As others also discovered, if the retrieval model becomes better, it is more likely that the second stage becomes useless or even leads to errors and overfitting. </li> <li>Using T4 instance. Don’t get me wrong, it is great that Kaggle increased for CPU Instance the RAM limit and introduced the 2xT4 option but I struggled with the RAM limitations on that Instance if using Pytorch DP so I switch back to P100 were I had not such problems. </li> <li>Training monolingual only on English. Even with my increased training samples due to translation of all Spanish, Portuguese and French Topics and Content into English, it does not show better results when using for example an <code>all-mpnet-base-v2</code>, what is monolingual, instead of using all data and train a multilingual model. 
</li> </ul> <h2>Code:</h2> <ul> <li><a href="https://github.com/KonradHabel/learning_equality" target="_blank">Training (GitHub)</a></li> <li><a href="https://www.kaggle.com/code/khabel/2nd-place-learning-equality-leaderboard-prize" target="_blank">Inference: Leaderboard Prize (Kaggle)</a></li> <li><a href="https://www.kaggle.com/code/khabel/2nd-place-learning-equality-efficiency-prize" target="_blank">Inference: Efficiency Prize (Kaggle)</a></li> </ul> <h2>Data:</h2> <ul> <li><a href="https://www.kaggle.com/datasets/khabel/learning-equality-language-switch" target="_blank">Translated Data</a></li> </ul> <h2>Checkpoints:</h2> <ul> <li>Checkpoints trained on all training data are available in the inference notebooks <a href="https://www.kaggle.com/code/khabel/2nd-place-learning-equality-leaderboard-prize/input" target="_blank">[Link]</a>, <a href="https://www.kaggle.com/code/khabel/2nd-place-learning-equality-efficiency-prize/input" target="_blank">[Link]</a></li> <li>Checkpoints trained on fold 0 for offline eval <a href="https://drive.google.com/drive/folders/102N-wVRLhzf9d5IL3r1HFdYCsCZqcNQF?usp=share_link" target="_blank">[Link]</a></li> </ul> <p>Thank you very much and congrats to your gold zone finishing and even more for becoming Kaggle Competition Grandmaster. </p> <p>A quantitative example when I switch from <em>static -&gt; dynamic</em> thresholding:</p> <p><strong>Model:</strong> 'sentence-transformers/paraphrase-multilingual-mpnet-base-v2'</p> <p><strong>Static Threshold:</strong> <code>th=0.60</code><br> <strong>Public:</strong> 0.63509<br> <strong>Privat:</strong> 0.67885</p> <p><strong>Dyn. Threshold:</strong> <code>margin=0.16</code><br> <strong>Public:</strong> 0.65236<br> <strong>Privat:</strong> 0.69374</p> <p>So at least for my models that seems to work pretty well. </p> <p>Thank you. Your race on that challenge after finishing <em>Player Contact Detection</em> in the prize zone was incredible. And congratulations also on your 3rd place in the <em>Efficiency Prize</em>, that's awesome in such a short time.</p> <p>Thanks to you, I will have look at.</p> <p>Thank you <a href="https://www.kaggle.com/ohanegby" target="_blank">@ohanegby</a> and my congratulations to you for finishing on 10th place and also becoming a Kaggle Competitions Master 😊. </p> <p>I tried <strong>Multiple Negative Ranking Loss</strong> also right in the beginning of the competition, but nearly instantly switcht to <strong>InfoNCE</strong>, cause even on a single <em>RTX 3090</em> my batch size was high enough to have enough negative pairs with a symmetric loss. But for sure the loss direction <strong>query-&gt;reference</strong> in our case <strong>topic-&gt;content</strong> is more important than the other direction.</p> <p>For translation I tested different approaches. Best offline translation with on huggingface available models are in my opinion the <strong><a href="https://huggingface.co/docs/transformers/model_doc/marian" target="_blank">MarianMT</a></strong> models. Multilingual models were even worse . But in the end, I did a simple export in an <em>.xlsx</em> file where first column is a numerical ID and second column is what I want to translate and uploaded this to google translate -&gt; document translation. The advantage is, this is much faster and generates more reliable results than MarianMT models, at least from my perspective. </p> <p>Thank you for sharing your insights, glad to see that this works for others as well.</p>
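<p>A small numpy sketch of the dynamic-threshold idea discussed above, as I read it: instead of one global similarity cutoff, every topic keeps all contents whose similarity lies within a fixed margin of that topic's best match. This is an interpretation of the description, not the author's code.</p> <pre><code>import numpy as np

def select_with_dynamic_threshold(sim_matrix, margin=0.16):
    """sim_matrix: (n_topics, n_contents) cosine similarities within one language.

    Returns a boolean mask of kept (topic, content) pairs: everything whose similarity
    is at least best_match - margin for its topic.
    """
    best = sim_matrix.max(axis=1, keepdims=True)     # per-topic best similarity
    return sim_matrix >= (best - margin)

# usage sketch: rows of the mask give each topic's predicted content indices
# mask = select_with_dynamic_threshold(sims, margin=0.18)
# preds = [np.where(row)[0] for row in mask]
</code></pre>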
Learning Equality - Curriculum Recommendations
31st Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, many thanks to the organizing team for the competition! I am just starting to focus on NLP domain and this competition was very educational for me, it also gave me my first solo silver 😀. It included a recommender-system problem and allowed me to practice text-embeddings. But I have reached my limits in creativity towards the end of the competition and tried unnecessary, overkill stuff. I will try to summarize my solution briefly. You can see the whole source-code from my <a href="https://github.com/nlztrk/Learning-Equality-Curriculum-Recommendations" target="_blank">GitHub here</a>. You can also see the inference code of my submission from <a href="https://www.kaggle.com/code/nlztrk/lecr-st-cross-encoders-cv-0-6461-50-pp" target="_blank">here</a>.</p> <h2>Solution Summary</h2> <p><img src="https://raw.githubusercontent.com/UKPLab/sentence-transformers/master/docs/img/InformationRetrieval.png" alt="architecture"></p> <p>I used <a href="https://github.com/UKPLab/sentence-transformers" target="_blank">sentence-transformers</a> library and the models from <a href="https://huggingface.co/" target="_blank">HuggingFace</a>. I tried to implement the shared architecture <a href="https://www.sbert.net/examples/applications/retrieve_rerank/README.html" target="_blank">here</a>.</p> <p>The pipeline consists of:</p> <ul> <li><a href="https://github.com/nlztrk/Learning-Equality-Curriculum-Recommendations/blob/main/0.%20Generating%20Splits.ipynb" target="_blank">Splitting the Data as Train/Val</a></li> <li><a href="https://github.com/nlztrk/Learning-Equality-Curriculum-Recommendations/blob/main/utils/unsupervised_utils.py#L154" target="_blank">Text Processing</a></li> <li><a href="https://github.com/nlztrk/Learning-Equality-Curriculum-Recommendations/blob/main/1.%20Unsupervised%20Training.py" target="_blank">Training Sentence-Transformer (Stage 1)</a></li> <li><a href="https://github.com/nlztrk/Learning-Equality-Curriculum-Recommendations/blob/main/2.%20Unsupervised%20Sampling.ipynb" target="_blank">Retrieve with kNN using Stage 1 Embeddings</a></li> <li><a href="https://github.com/nlztrk/Learning-Equality-Curriculum-Recommendations/blob/main/3.%20Supervised%20Training.py" target="_blank">Training Cross-Encoder (Stage 2)</a></li> <li><a href="https://github.com/nlztrk/Learning-Equality-Curriculum-Recommendations/blob/main/4.%20Inference.ipynb" target="_blank">Inference</a></li> </ul> <h3>Splitting the Data as Train/Val</h3> <p>I've seen a lot of different approaches on the forum. I also wanted to use the imbalance in language distribution in my approach. I set all the data coming from <strong>source</strong> as <strong>train</strong>. 
For the remaining, I used:</p> <ul> <li><strong>CV Scheme:</strong> Grouped Stratified K-Fold</li> <li><strong>Folds:</strong> 5 (Used only the first)</li> <li><strong>Group:</strong> Topic ID</li> <li><strong>Stratifier Label:</strong> Language</li> </ul> <h3>Text Processing</h3> <ul> <li>Created topic tree</li> <li>Created special tokens for each value <strong>language</strong> and <strong>content kind</strong> can take.</li> <li>Created identifier separators for <strong>topic title</strong>, <strong>topic tree</strong>, <strong>topic description</strong>, <strong>content title</strong>, <strong>content description</strong> and <strong>content text</strong>.</li> </ul> <p>My final input for the model was like:</p> <ul> <li><strong>Topic:</strong> <code>[&lt;[language_en]&gt;] [&lt;[topic_title]&gt;] videos [&lt;[topic_tree]&gt;] maths g3 to g10 &gt; maths &gt; g6 &gt; 17. geometrical constructions &gt; perpendicular and perpendicular bisector &gt; videos [&lt;[topic_desc]&gt;] nan</code></li> <li><strong>Content:</strong> <code>[&lt;[language_en]&gt;] [&lt;[kind_exercise]&gt;] [&lt;[cntnt_title]&gt;] level 3: identify elements of simple machine(axle,wheel,pulley and inclined plane etc [&lt;[cntnt_desc]&gt;] nan [&lt;[cntnt_text]&gt;] nan</code></li> </ul> <h3>Training Sentence-Transformer (Stage 1)</h3> <ul> <li><strong>Base Model:</strong> <a href="https://huggingface.co/AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2" target="_blank">AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2</a></li> <li><strong>Sequence Length:</strong> 128</li> <li><strong>Epochs:</strong> 50</li> <li><strong>Batch Size:</strong> 128</li> <li><strong>Warm-Up Ratio:</strong> 0.03</li> </ul> <h3>Retrieve with kNN using Stage 1 Embeddings</h3> <p>I used <strong>kNN</strong> from <a href="https://rapids.ai/" target="_blank">RAPIDS</a> and get closest <strong>100</strong> content embedding for each topic embedding using <strong>cosine-similarity</strong>.</p> <h3>Training Cross-Encoder (Stage 2)</h3> <ul> <li><strong>Base Model:</strong> Trained model from Stage 1</li> <li><strong>Output:</strong> Sigmoid</li> <li><strong>Sequence Length:</strong> 128</li> <li><strong>Epochs:</strong> 15</li> <li><strong>Batch Size:</strong> 256</li> <li><strong>Warm-Up Ratio:</strong> 0.05</li> </ul> <h3>Inference</h3> <ul> <li>Ran all the steps above sequentially in a single script.</li> <li>Tuned classification threshold on the hold-out validation set to maximize F2-Score.</li> <li>Imputed empty topic rows with the highest scoring content IDs.</li> </ul> <h3>Didn't Work &amp; Improve</h3> <ul> <li>Language specific kNN</li> <li>Smaller models</li> <li>Lower sequence length</li> <li>Lower batch-size</li> <li>Union submission blending</li> </ul>
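<p>For the special-token and input-formatting step above, a minimal Hugging Face sketch could look like this. The token list is only a small illustrative subset (the real setup covers every language and content kind), and the fields come from a hypothetical pandas row.</p> <pre><code>from transformers import AutoTokenizer, AutoModel

backbone = "AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2"
tokenizer = AutoTokenizer.from_pretrained(backbone)
model = AutoModel.from_pretrained(backbone)

# illustrative subset of the special tokens; the real list covers all languages / kinds
special_tokens = ["[<[language_en]>]", "[<[kind_exercise]>]", "[<[topic_title]>]",
                  "[<[topic_tree]>]", "[<[topic_desc]>]", "[<[cntnt_title]>]",
                  "[<[cntnt_desc]>]", "[<[cntnt_text]>]"]
tokenizer.add_special_tokens({"additional_special_tokens": special_tokens})
model.resize_token_embeddings(len(tokenizer))      # make room for the new token ids

def topic_text(row):
    # mirrors the topic input format shown above; `row` is a hypothetical pandas row
    return (f"[<[language_{row.language}]>] [<[topic_title]>] {row.title} "
            f"[<[topic_tree]>] {row.topic_tree} [<[topic_desc]>] {row.description}")
</code></pre>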
Learning Equality - Curriculum Recommendations
32nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the organizers for the fun competition and everyone who participated.<br> I share my solution.</p> <h1>Summary</h1> <ul> <li>2-stage: Retrieval (Bi-Encoder) and Re-Ranker (Cross-Encoder)</li> <li>pipeline: below</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F164129%2F2a6b8a5b577eb4c98cc362c38fb4fcaf%2Fpipeline.png?generation=1678889845536827&amp;alt=media" alt=""></p> <h1>1. Retrieval</h1> <ul> <li>input data:<ul> <li>topic: <code>title + description + [SEP-Depth] + level + [SEP-context] + context + [SEP-children] + children</code></li> <li>context: <code>title + description + text + [SEP-Kind] + kind</code></li></ul></li> <li>split train/valid: StratifiedGroupKFold (y=channel, group=topic_id) only 1fold</li> <li>model:<ul> <li>Bi-Encoder (Sentence-Transformers)</li> <li>loss: NT-Xent loss (<a href="https://arxiv.org/pdf/2002.05709.pdf" target="_blank">https://arxiv.org/pdf/2002.05709.pdf</a>)</li> <li>pretrained-model<ul> <li>(1) xlm-roberta-base</li> <li>(2) sentence-transformers/paraphrase-multilingual-mpnet-base-v2</li></ul></li> <li>tokenizer: add special token ([SEP-Depth] etc.)</li> <li>batch_size: 256, max_len=128</li></ul></li> </ul> <table> <thead> <tr> <th>pretrained-model</th> <th>training data</th> <th>Rec@10</th> <th>Rec@50</th> <th>f2@10</th> <th>pub@10</th> <th>pri@10</th> </tr> </thead> <tbody> <tr> <td>xlm-roberta-base</td> <td>train</td> <td>76.8</td> <td>91.1</td> <td>50.3</td> <td>46.9</td> <td>46.9</td> </tr> <tr> <td>paraphrase-multilingual-mpnet-base-v2</td> <td>train</td> <td>78.5</td> <td>91.5</td> <td>51.5</td> <td>47.2</td> <td>47.4</td> </tr> <tr> <td>paraphrase-multilingual-mpnet-base-v2</td> <td>train+valid</td> <td>93.3</td> <td>99.0</td> <td>62.1</td> <td>48.9</td> <td>49.5</td> </tr> </tbody> </table> <h1>2. Select Candidate</h1> <ul> <li>compute embedding vector by model, and calculate cosine-similarity between all topics and all contents </li> <li>select top50 by cosine-similarity per model-&gt; select duplicate candidates</li> <li>top10 : public=53.4, private=55.4</li> </ul> <h1>3. 
Re-Ranker</h1> <ul> <li>input data: </li> </ul> <pre><code> title + description + [SEP-Depth] + level + [SEP-context] + context + \ [SEP-children] + children + [SEP] + \ title + description + text + [SEP-Kind] + kind </code></pre> <ul> <li>split train/valid: same as stage 1</li> <li>model:<ul> <li>Cross-Encoder</li> <li>loss: BCE loss</li> <li>adversarial learning: FGM</li> <li>batch_size: 128, max_len=256</li></ul></li> <li>threshold: 0.1</li> </ul> <table> <thead> <tr> <th>#</th> <th>model</th> <th>training data</th> <th>local</th> <th>public</th> <th>private</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>xlm-roberta-base</td> <td>train</td> <td>67.2</td> <td>61.3</td> <td>64.1</td> </tr> <tr> <td>2</td> <td>paraphrase-multilingual-mpnet-base-v2</td> <td>train</td> <td>67.6</td> <td>61.8</td> <td>64.7</td> </tr> <tr> <td>3</td> <td>paraphrase-multilingual-mpnet-base-v2</td> <td>train+valid</td> <td>68.0</td> <td>63.4</td> <td>66.3</td> </tr> <tr> <td>final</td> <td>ensemble (weight=1:1:3)</td> <td>-</td> <td>69.3</td> <td>64.4</td> <td>67.8</td> </tr> </tbody> </table> <h1>Didn't work</h1> <ul> <li>Defining graph data from the topic structure of the curriculum and training a GNN (link prediction). It didn't work.</li> <li>Using LightGBM in stage 2 (higher-ranked teams used it successfully, so my implementation was probably the problem…)</li> </ul> <p>Thank you for reading.</p>
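<p>The FGM adversarial learning used in the re-ranker above follows a fairly standard recipe: perturb the word embeddings along the gradient direction, compute a second loss, then restore the weights. Below is a common PyTorch sketch of that recipe (epsilon and the embedding-parameter name are assumptions, not the author's exact settings).</p> <pre><code>import torch

class FGM:
    """Fast Gradient Method on the word embeddings (standard recipe; eps is a guess)."""
    def __init__(self, model, eps=1.0, emb_name="word_embeddings"):
        self.model, self.eps, self.emb_name = model, eps, emb_name
        self.backup = {}

    def attack(self):
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0 and not torch.isnan(norm):
                    param.data.add_(self.eps * param.grad / norm)

    def restore(self):
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

# usage inside the training loop (sketch):
# loss = compute_loss(model, batch); loss.backward()
# fgm.attack(); adv_loss = compute_loss(model, batch); adv_loss.backward(); fgm.restore()
# optimizer.step(); optimizer.zero_grad()
</code></pre>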
Learning Equality - Curriculum Recommendations
39th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and The Learning Agency Lab for this exciting competition, and thanks to kagglers and my great teammate <a href="https://www.kaggle.com/youjods" target="_blank">@youjods</a> .</p> <h1>Summary</h1> <p>Our model is a 2-stage configuration(Retriever and Reranker).<br> We used models downloaded from <a href="https://huggingface.co/" target="_blank">huggingface</a> and trained with the <a href="https://www.sbert.net/" target="_blank">sentence-transformers</a> library.<br> The following points contributed significantly to the score:</p> <ul> <li>Using sentence-transformers/xlm-r-distilroberta-base-paraphrase-v1 for backbone model</li> <li>Using the Retriever model as a backbone of Reranker training</li> <li>Using OnlineContrastiveLoss for Reranker</li> </ul> <h1>Codes</h1> <ul> <li><a href="https://github.com/calpis10000/kaggle-lecr" target="_blank">Training</a></li> <li><a href="https://www.kaggle.com/code/calpis10000/lecr-calpis-exp037" target="_blank">Inference</a></li> </ul> <h1>CV Strategy</h1> <p>We used GroupKGold, which is handled differently depending on category.</p> <ul> <li>category=='source' topics were all used for training.</li> <li>The other categories are divided by GroupKFold keyed by channel, and the following data are used for validation.<ul> <li>1 fold topics (as unknown channel topics for train data)</li> <li>Other fold topics sampled same number of above fold (as known channel topics for train data)</li></ul></li> </ul> <p>Valid-scores were calculated for known and unknown channels, respectively.</p> <h1>preprocess</h1> <p>We referred to <a href="https://www.kaggle.com/conjuring92" target="_blank">@conjuring92</a> 's discussion: <a href="https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/376873" target="_blank">Topic Context Matters in Supervised Pipeline</a></p> <p>topics: channel + language + level + title + description + context(title) + context(description) + children_title<br> content: kind + language + title + description+ text </p> <p>We have cut the title and discussion to some length. The cut length differs between Retriever and Reranker. 
(Reranker is shorter)</p> <h1>Stage1: Retriever</h1> <p>We trained models from huggingface using the sentence-transformers library with MultipleNegativesRankingLoss.<br> We tried various backbone models, and the following configuration produced the best recall score.</p> <ul> <li>backbone: sentence-transformers/xlm-r-distilroberta-base-paraphrase-v1</li> <li>epoch: 20</li> <li>batch_size: 128</li> <li>lr: 2e-5</li> </ul> <p>Recall@100 resulted in:</p> <ul> <li>0.8745 for the whole valid data</li> <li>0.93929 for known channels</li> <li>0.80972 for unknown channels</li> </ul> <h1>Stage2: Reranker</h1> <p>In stage 2, the top-100 nearest contents retrieved for every topic in stage 1 were used as Reranker candidates.<br> We then finetuned the Retriever model with the sentence-transformers library using OnlineContrastiveLoss.<br> We first trained with simple binary classification, but OnlineContrastiveLoss boosted the F2 score as follows (a minimal training sketch is given at the end of this post):</p> <ul> <li>binary-classification: CV 0.4565, LB: 0.553</li> <li>OnlineContrastiveLoss: CV 0.5414, LB: 0.619</li> </ul> <h1>Didn't work</h1> <ul> <li>Ensemble<ul> <li>Ensembling improved our validation score but worsened the LB score.</li></ul></li> <li>Other pretrained models (e.g. sentence-transformers/all-MiniLM-L12-v2)</li> <li>LightGBM Reranker</li> </ul>
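<p>As a rough illustration of the two training stages above, here is a minimal sentence-transformers sketch: MultipleNegativesRankingLoss over positive (topic, content) pairs for the Retriever, and OnlineContrastiveLoss over labelled top-100 candidate pairs for the Reranker. The positive_pairs and candidate_pairs iterables and their keys are hypothetical placeholders for the preprocessed data.</p> <pre><code>from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/xlm-r-distilroberta-base-paraphrase-v1")

# ---- Stage 1: Retriever (positive pairs only, in-batch negatives) ----
retriever_examples = [
    InputExample(texts=[row["topic_text"], row["content_text"]])
    for row in positive_pairs          # hypothetical iterable of correlated pairs
]
retriever_loader = DataLoader(retriever_examples, batch_size=128, shuffle=True)
retriever_loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(retriever_loader, retriever_loss)],
          epochs=20, warmup_steps=100)

# ---- Stage 2: Reranker (finetune the same model on labelled candidates) ----
reranker_examples = [
    InputExample(texts=[row["topic_text"], row["content_text"]],
                 label=float(row["is_correlated"]))   # 1 if the pair is in correlations.csv
    for row in candidate_pairs         # hypothetical top-100 candidates per topic
]
reranker_loader = DataLoader(reranker_examples, batch_size=128, shuffle=True)
reranker_loss = losses.OnlineContrastiveLoss(model)
model.fit(train_objectives=[(reranker_loader, reranker_loss)], epochs=5)
</code></pre> <p>Using the Retriever weights as the starting point for the Reranker, as in the write-up, is what the second model.fit call on the same model object corresponds to.</p>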
Learning Equality - Curriculum Recommendations
3rd solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks. </p> <p>The stage1 train code we used are modified from <a href="https://github.com/yangjianxin1/SimCSE" target="_blank">https://github.com/yangjianxin1/SimCSE</a><br> The stage2 train code basically from <a href="https://www.kaggle.com/code/ragnar123/lecr-xlm-roberta-base-baseline" target="_blank">https://www.kaggle.com/code/ragnar123/lecr-xlm-roberta-base-baseline</a></p> <p>update: added briefly training code of stage1 simcse model. </p> <p>Congrats for your solo gold, I also learned a lot from your writeup!</p> <p>mdeberta about 3-4 days on A100 because fp16 not working on mdeberta, xlmr base about 1 day and large about 2-3 days.</p> <p>First of all, I would like to thank the organizers for hosting this high-quality competion, and my awesome teammates <a href="https://www.kaggle.com/xiamaozi11" target="_blank">@xiamaozi11</a> <a href="https://www.kaggle.com/syzong" target="_blank">@syzong</a> <a href="https://www.kaggle.com/sayoulala" target="_blank">@sayoulala</a> <a href="https://www.kaggle.com/yzheng21" target="_blank">@yzheng21</a>, we all worked hard for this competion. And I learned a lot from the great notebooks and discussions, basically all the methods we used are from the kaggle community. Thanks to these generous and smart kagglers!</p> <ul> <li>tips from hosts: <a href="https://www.kaggle.com/code/jamiealexandre/tips-and-recommendations-from-hosts" target="_blank">https://www.kaggle.com/code/jamiealexandre/tips-and-recommendations-from-hosts</a></li> <li>text pre-processing: <a href="https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/376873" target="_blank">https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/376873</a></li> <li>stage1 and stage2 train and submit pipeline: <a href="https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/373640" target="_blank">https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/373640</a></li> <li>stage1 and stage2 modeling: <a href="https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/381509" target="_blank">https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/381509</a>, <a href="https://www.kaggle.com/code/ragnar123/lecr-xlm-roberta-base-baseline" target="_blank">https://www.kaggle.com/code/ragnar123/lecr-xlm-roberta-base-baseline</a></li> </ul> <h3>Summary</h3> <ul> <li>CV strategy</li> <li>Stage1: Retriever</li> <li>Stage2: Ranker</li> <li>Finding threshold</li> <li>Post-Processing</li> <li>Ensemble</li> </ul> <h3>training pipeline</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1197382%2F47f76a52a49058f4f234858cfba9969f%2F1_training_pipeline.png?generation=1678844482527465&amp;alt=media" alt=""></p> <h3>CV strategy</h3> <p>We only used 4,000 random topics which <strong>category != 'source'</strong> as hold out data. Those topics were as the validation data and never used in any training process. 
This simple CV strategy was unexpectedly stable. </p> <p>In the last month of the competition, we changed 4,000 topics to 1,000, which was still relatively consistent until we started to ensemble.</p> <h3>Retriever</h3> <p>We used unsupervised SIMCSE (Simple Contrastive Learning of Sentence Embeddings: <a href="https://github.com/princeton-nlp/SimCSE" target="_blank">https://github.com/princeton-nlp/SimCSE</a>) method for training retriever models.</p> <h4>training retriever</h4> <ul> <li>only used positive samples from correlations.csv for unsupervised simcse training</li> <li>random choice 100 negative samples per validation topic from same language, for validation set </li> <li>content text format: <code>title [SEP] kind [SEP] description [SED] text</code>, maxlen = 256 (string level)</li> <li>topic text format: <code>title [SEP] channel [SEP] category [SEP] level [SEP] language [SEP] description [SEP] context [SEP] parent_description [SEP] children_description</code>, maxlen = 256 (string level)</li> <li><code>simcse_unsup_loss</code></li> </ul> <blockquote> <pre><code>def simcse_unsup_loss(feature_topic, feature_content) -&gt; 'tensor': y_true = torch.arange(0, feature_topic.size(0), device=device) sim = F.cosine_similarity(feature_topic.unsqueeze(1), feature_content.unsqueeze(0), dim=2) sim = sim / 0.05 loss = F.cross_entropy(sim, y_true) loss = torch.mean(loss) return loss </code></pre> <p>from: <a href="https://github.com/yangjianxin1/SimCSE/blob/master/model.py" target="_blank">https://github.com/yangjianxin1/SimCSE/blob/master/model.py</a></p> </blockquote> <ul> <li>train code like:</li> </ul> <blockquote> <pre><code>for step, (inputs_topic, inputs_content, labels) in enumerate(train_loader): inputs_topic = collate(inputs_topic) for k, v in inputs_topic.items(): inputs_topic[k] = v.to(device) inputs_content = collate(inputs_content) for k, v in inputs_content.items(): inputs_content[k] = v.to(device) batch_size = labels.size(0) with torch.cuda.amp.autocast(enabled=CFG.apex): feature_topic = model(inputs_topic) feature_content = model(inputs_content) loss = simcse_unsup_loss(feature_topic, feature_content) </code></pre> </blockquote> <p>Performance on 1,000 topics validation data:</p> <table> <thead> <tr> <th>model</th> <th>F2@5</th> <th>max positive score top50</th> <th>max positive score top100</th> </tr> </thead> <tbody> <tr> <td>paraphrase-multilingual-mpnet-base-v2</td> <td>0.5250</td> <td>0.9135</td> <td>0.9443</td> </tr> <tr> <td>all-MiniLM-L6-v2</td> <td>0.4879</td> <td>0.9045</td> <td>0.9353</td> </tr> <tr> <td>mdeberta-v3-base</td> <td>0.4689</td> <td>0.8938</td> <td>0.9187</td> </tr> </tbody> </table> <h4>recall</h4> <p>We didn't use KNN to clusting, simply calculate cosine similarity for each topic and all content samples of the topic's language and then choose topN samples.</p> <p>We also tested the retriever ensemble (weighted cosine similarity). Although the max positive score top50 score has been improved to 0.9235, but there is basically no change on the LB score. So we only used a single retriever model (paraphrase-multilingual-mpnet-base-v2) in the final submit.</p> <h3>Ranker</h3> <p>The ranker in stage2 basically is a binary classification model.</p> <p>We used our best simcse finetuned model (paraphrase-multilingual-mpnet-base-v2) to infer on train set topics, calculate cosine similarity for each topic and all content samples of the topic's language and then choose top100 samples. 
We also added all positive samples from correlations.csv.</p> <p>Texts were prepared the same way as in stage 1. Pair format: <code>content [SEP] topic</code>, maxlen = 256 (token level)</p> <p>Hard negative samples from the retriever model can greatly improve the performance of ranker models.</p> <table> <thead> <tr> <th>retrieve model (max positive score top100)</th> <th>ranker f2 score (LB)</th> </tr> </thead> <tbody> <tr> <td>0.80</td> <td>0.585</td> </tr> <tr> <td>0.94</td> <td>0.688</td> </tr> </tbody> </table> <p>We used two model initialization methods. One is to directly load the model weights from huggingface, and the other is to load the model weights after simcse finetuning. The performance of the two methods is basically the same, while the latter is slightly higher and converges faster.</p> <p>We also used FGM and EMA during training. FGM+EMA improved the score by about 0.01 (a typical FGM implementation is sketched at the end of this post). </p> <table> <thead> <tr> <th>model</th> <th>validation (1,000 topics)</th> <th>LB score</th> <th>PB score</th> </tr> </thead> <tbody> <tr> <td>mdeberta-v3-base (loading simcse weights)</td> <td>0.7149</td> <td>0.688</td> <td>0.727</td> </tr> <tr> <td>mdeberta-v3-base</td> <td>0.6378</td> <td>0.669</td> <td>0.693</td> </tr> <tr> <td>xlm-roberta-large (loading simcse weights)</td> <td>0.6987</td> <td>-</td> <td>-</td> </tr> <tr> <td>xlm-roberta-base (loading simcse weights)</td> <td>0.6780</td> <td>-</td> <td>-</td> </tr> <tr> <td>paraphrase-multilingual-mpnet-base-v2 (loading simcse weights)</td> <td>0.6299</td> <td>-</td> <td>-</td> </tr> </tbody> </table> <h3>Finding threshold</h3> <p>We searched the threshold in a loop, calculating the F2 metric on the 1,000-topic validation data, with code like:</p> <pre><code>best_thres = 0.
best_score = 0.
best_n_rec = 10
for thres in tqdm(np.arange(0.01, 0.2, 0.005)):
    for n_rec in range(30, 50):
        test_sub = test_data[test_data['score'] &gt;= thres].reset_index(drop=True)
        sub_df = test_sub.groupby('topic_id').apply(lambda g: g.head(n_rec)).reset_index(drop=True)
        score = calc_f2(sub_df, label_df)
        if score &gt; best_score:
            best_score = score
            best_thres = thres
            best_n_rec = n_rec
</code></pre> <p>When submitting a single model, this method basically gave CV-LB consistency (CV is about 0.02-0.03 higher than LB).</p> <p>But in the last two weeks of the competition, when we started to ensemble, we lost the CV-LB consistency; I think the reason may be that the 1,000-topic validation data is not big enough.</p> <h3>Post-Processing</h3> <p>After applying the threshold, a small number of topics do not match any contents. For these we simply used the top-4 contents ranked by the original scores.</p> <p>We tried recalling more contents for this part of the topics, but the LB score didn't improve.</p> <p>We also tried using a different threshold for each language, but both CV and LB scores dropped a little.</p> <h3>Ensemble</h3> <p>We trained 20+ ranker models on different numbers of recall samples per topic, such as 50, 70 and 100.
</p> <ul> <li>mdeberta (simcse weights, 4,000 validate topics)</li> <li>mdeberta (simcse weights, 4,000 validate topics, with FGM,EMA)</li> <li>mdeberta (simcse weights, 1,000 validate topics)</li> <li>mdeberta (simcse weights, 1,000 validate topics, with FGM,EMA)</li> <li>mdeberta (1,000 validate topics, with FGM,EMA)</li> <li>xlm-roberta-large (simcse weights, 1,000 validate topics, with FGM,EMA)</li> <li>xlm-roberta-base (simcse weights, 1,000 validate topics, with FGM,EMA)</li> </ul> <p>We used LinearRegression to fit on 1,000 topics validation model output score to get coef_ array, and then used as blending weights:</p> <pre><code>pcols = [c for c in valid_data.columns if c.startswith('score')] for cols in tqdm([i for i in combinations(pcols, 10)]): cols = list(cols) X = valid_data[cols].values y = valid_data['label'].values lr = LinearRegression().fit(X, y) coef = lr.coef_ print(get_score(valid_data, df_target_metric, cols, coef)) </code></pre> <p>We started with 100 recall samples per topic, but due to time limits, we can only use up to 6 models. So we tried 70 and 50 recall samples in the later stage of the competition.</p> <table> <thead> <tr> <th>number of recall samples per topic</th> <th>models</th> <th>validation (1,000 topics)</th> <th>LB score</th> <th>PB score</th> </tr> </thead> <tbody> <tr> <td>100</td> <td>6</td> <td>0.725</td> <td>0.705</td> <td>0.738</td> </tr> <tr> <td>70</td> <td>10</td> <td>0.738</td> <td>0.715</td> <td>0.751</td> </tr> <tr> <td>50</td> <td>12</td> <td>0.743</td> <td>0.715</td> <td>0.751</td> </tr> </tbody> </table> <h3>Train Code:</h3> <p><a href="https://github.com/syzong/2023-Kaggle-LECR-Top3-TrainCode" target="_blank">https://github.com/syzong/2023-Kaggle-LECR-Top3-TrainCode</a></p> <p>unsupervised version, we only used positive samples, supervised version (add negative samples) we tried but not success. </p> <p>Thanks. Max pos score is computed for only holdnout set (1,000 topics).</p> <p>Thanks! Congrats for your solo gold!</p> <p>Thank you!</p> <p>update: sharing training codes: <a href="https://github.com/syzong/2023-Kaggle-LECR-Top3-TrainCode" target="_blank">https://github.com/syzong/2023-Kaggle-LECR-Top3-TrainCode</a></p> <p>FGM: fast gradient methods <br> EMA: exponential moving average</p> <p><a href="https://github.com/syzong/2023-Kaggle-LECR-Top3-TrainCode/blob/main/adv_utils.py" target="_blank">https://github.com/syzong/2023-Kaggle-LECR-Top3-TrainCode/blob/main/adv_utils.py</a></p> <p>Thanks. In Retriever part 256 is string length. Later in the ranker part I concatenate them together, then truncated with token length 256.</p> <p>OK, maybe you're right. This 'unsupervised' statement is indeed a bit confusing, but my main point is that we use the simcse unsupervised loss function (<a href="https://github.com/yangjianxin1/SimCSE/blob/master/model.py#L43)" target="_blank">https://github.com/yangjianxin1/SimCSE/blob/master/model.py#L43)</a>, so we call it 'unsupervised' in this circumstance.</p> <p>Thanks!</p> <ol> <li>Yes, we only used positive samples, no any negative sample in the retriever training.</li> <li>We used the trained retriever to get 100 recalling contents per topic (hard negative from here), and all the postive, to supply to ranker training.</li> </ol> <p>You can check our code here: <a href="https://github.com/syzong/2023-Kaggle-LECR-Top3-TrainCode" target="_blank">https://github.com/syzong/2023-Kaggle-LECR-Top3-TrainCode</a></p>
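<p>Since FGM comes up both in the ranker section above and in the replies, here is a minimal PyTorch sketch of the FGM idea: perturb the word-embedding weights along the gradient direction, take an extra backward pass, then restore the weights. This is a generic implementation, not the exact code from the linked adv_utils.py; the emb_name argument is an assumption about the embedding parameter name.</p> <pre><code>import torch

class FGM:
    """Fast Gradient Method: adds an adversarial perturbation to the embeddings."""

    def __init__(self, model, epsilon=1.0):
        self.model = model
        self.epsilon = epsilon
        self.backup = {}

    def attack(self, emb_name="word_embeddings"):
        for name, param in self.model.named_parameters():
            if param.requires_grad and emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0 and not torch.isnan(norm):
                    param.data.add_(self.epsilon * param.grad / norm)

    def restore(self, emb_name="word_embeddings"):
        for name, param in self.model.named_parameters():
            if param.requires_grad and emb_name in name and name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

# Typical usage inside a training loop (model, batch, labels, criterion are assumed):
# loss = criterion(model(batch), labels); loss.backward()    # normal pass
# fgm.attack()                                                # perturb embeddings
# loss_adv = criterion(model(batch), labels); loss_adv.backward()
# fgm.restore()                                               # remove perturbation
# optimizer.step(); optimizer.zero_grad()
</code></pre>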
Learning Equality - Curriculum Recommendations
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks a lot to Kaggle and the hosts, specifically <a href="https://www.kaggle.com/jamiealexandre" target="_blank">@jamiealexandre</a>, for hosting this competition. Unfortunately, we only could join after NFL competition finished, so we tried to do our best within an eleven days sprint, and we are very happy about the outcome. Improving every day can be quite thrilling - but at the same time we regret a bit not having more time for this competition as it was really interesting and fun participating here, and I believe our solution has not yet reached its ceiling. </p> <p>Due to time constraints, our solution is based only on an ensemble of embedding models with cosine similarity matching as well as additional post-processing.</p> <h3>Data processing</h3> <p>We only use the following features as text input for out models: language, category, title and description<br> An "issue" with the data is, that some topics and content items do not have a proper description, or very inconclusive titles. So it is helpful to supplement them.</p> <p>For topics, we replace the title with an inverse track of the category tree titles, so basically adding the breadcrumbs, we do the same for description. <br> For content, we concatenate the description and text fields as a single column.</p> <p>The benefit of this approach is, that it will add information to those records that have incomplete data. With tokenization and truncation, the models will prioritize the original information, if available, and otherwise use the supplemented information. So for instance, if a content item has description and text available, it will prioritize the description, but otherwise if description is missing, it will use the text.</p> <h3>Validation setup and models</h3> <p>Due to lack of time, we focused on a single holdout. For this, we split out non-source categories. Contents matching to these categories can then be either already be part of training, or completely unseen. We got strong correlation with this setup to public and private leaderboard in relative terms.</p> <p>For submissions, we always retrained models on full data. As we were running out of time, we could not always do both a validation and fullfit. So 3-4 days before end, we only did blind fullfits, and blended them on submission. And for local validation and testing post-processing we relied on some earlier models. While not ideal, it was a reasonable approach given the time constraints.</p> <h3>Embedding models</h3> <p>We only use ArcFace models. For input and training the models we use two different schemas:</p> <p><strong>Topic-based labeling.</strong> Here, a single label is defined as a topic, and all content items that match to this topic. So for instance: Label 1: Topic A, Content A, Content C, …</p> <p>This means that content items occur as many times as they match topics, and each topic is only a single sample. This approach is strongest on its own with the F2-based metric.</p> <p><strong>Content-based labeling.</strong> Here, a single label is defined as a content item, and a topic that matches to this content. 
So for instance: Label 1: Content A, Topic A</p> <p>This means that each topic-content pair is a single label. This method worked worse individually, but blended quite nicely with the topic-based approach.</p> <p>Our final blend contains 7 topic-based, and 2 content-based models. Backbones are mostly xlm-roberta-base, xlm-roberta-large, paraphrase-multilingual-mpnet-base-v2, or deberta-v3-large.</p> <h3>Post-processing</h3> <p>We played a lot with different post-processing techniques as this is always something that is useful in metric-learning matching. We optimize the treshold automatically in the kernel to a certain average number of matches per topic. Also, we found that penalizing the cosine similarities based on additional information helps. First, we slightly reduce similarity probabilities for content that only matches to a single topic in whole training. Second, we increase the probability of content items that are not available in training. We also always match new content to the top ranked topic, if above a certain threshold. Finally, we also additionally add new matches if we have less than five matches for a topic, but the additional probabilities are above a certain ratio to the higher ranked probabilities.</p> <h3>What did not work (due to time)</h3> <p>We spent 2-3 days trying to tune bi-encoder text models for second-stage, but could not get anything that improved our first-stage embedding models to be worth the additional runtime. So we decided to drop it and focus on first-stage only.</p> <p>Also, we spent some time on trying to tune LGB second-stage models. We were quite sure that they should be working well, and might replace also some manualy post-processing. But while CV looked reasonable, LB was dropping a bit, and we were not too confident in the validation setup for it, so we dropped it. Seeing other solutions, it definitely seems to be helpful, and I believe it could push our solution higher. </p> <h3>Efficiency sub</h3> <p>We also have an efficiency sub scoring 0.72 ensembling two smaller models on shorter token lengths running in 22 minutes. We use multiprocessing and ONNX. We probably lack a good 2nd stage LGB model to boost the score higher here. </p> <p>As always, cheers to my amazing team-mate <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a>.</p> <p>All training and inference code can be found <a href="https://github.com/psinger/kaggle-curriculum-solution" target="_blank">online</a>.</p> <p>We honestly never benchmarked it vs. pure Pytorch, but I did so a lot in past feedback competitions, and multiprocessed ONNX was always better for me.</p> <p>You are absolutely right that data-split multiprocessing is always better. So we also multiprocess by data, and not by model forward. Actually, the goal needs to be to see 400% CPU utilization in the kernel. Afaik, I can always only manage 200% with pure Pytorch multiprocessing, as it does not seem to properly use all virtual cores, only the two physical ones. With data-split MP, it is easier.</p> <p>Let's see where we end on the efficiency, first place will be interesting battle between <a href="https://www.kaggle.com/khabel" target="_blank">@khabel</a> and you :)</p> <p>Probably not 2x faster, because the virtual cores will give you less boost. Somehow the behavior is sometimes also really weird in (Kaggle) kernels. And tokenization also needs CPU time, so everything is a bit tricky. 
</p> <p>For ONNX I shared some details here: <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/370020" target="_blank">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/370020</a></p> <p>That competition is in general quite useful to browse for different efficiency solutions.</p> <p>No, it worked out of the box. Actually, our efficiency solution also uses deberta-v3-xsmall.</p>
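<p>To illustrate the ONNX plus data-split multiprocessing idea discussed above, here is a rough sketch: export the fine-tuned encoder to ONNX once, then split the rows across two CPU worker processes, each running its own onnxruntime session. The file paths, thread settings, and the assumption that the graph was exported with inputs named input_ids and attention_mask (and returns pooled outputs) are placeholders, not the authors' actual configuration.</p> <pre><code>import numpy as np
import onnxruntime as ort
from multiprocessing import Pool
from transformers import AutoTokenizer

MODEL_DIR = "my_finetuned_model"   # hypothetical path to the trained model
ONNX_PATH = "model.onnx"           # hypothetical exported ONNX file

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)

def run_chunk(texts):
    # Each worker builds its own session; limit threads so two workers share the vCPUs.
    opts = ort.SessionOptions()
    opts.intra_op_num_threads = 2
    sess = ort.InferenceSession(ONNX_PATH, opts, providers=["CPUExecutionProvider"])
    outputs = []
    for i in range(0, len(texts), 32):
        enc = tokenizer(texts[i:i + 32], padding=True, truncation=True,
                        max_length=128, return_tensors="np")
        # Assumes the exported graph uses these two input names and a pooled output.
        out = sess.run(None, {"input_ids": enc["input_ids"],
                              "attention_mask": enc["attention_mask"]})[0]
        outputs.append(out)
    return np.concatenate(outputs)

def predict_multiprocess(all_texts, n_workers=2):
    chunks = np.array_split(np.array(all_texts, dtype=object), n_workers)
    with Pool(n_workers) as pool:
        results = pool.map(run_chunk, [c.tolist() for c in chunks])
    return np.concatenate(results)
</code></pre>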
Learning Equality - Curriculum Recommendations
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to Kaggle and The Learning Agency Lab for an interesting problem, and to kagglers for a strong competition!</p> <p>You can find an example inference notebook here: <a href="https://www.kaggle.com/code/churkinnikita/lecr-example-0-705-public" target="_blank">https://www.kaggle.com/code/churkinnikita/lecr-example-0-705-public</a><br> It uses only one [SBERT + LightGBM] model and scores 0.705 public / 0.741 private, corresponding to 11th place on the final LB.</p> <p><strong>Validation setup</strong><br> From LB probing we know that there are approximately 9% of new channels (graphs) in the test set, so I tried to mimic this logic.<br> I created a 7-fold validation scheme in which, for every training fold, 9-10% of the channels in the corresponding validation fold are unseen. But instead of all 7 folds I used 1 or 2 folds to check improvements almost all the time. Correlation to the public LB was perfect. After some time I switched to a classic 5-fold scheme (based only on topic ids) because training that way led to better results on public (train set and test set are mixed). For the Stage2 model I used exactly the same folds for validation. For computing the F2 score I filtered only "non source" data.</p> <p><strong>Stage 1</strong></p> <p>I basically used the <a href="https://www.sbert.net/" target="_blank">SBERT</a> package for training the stage-1 model.<br> My solution uses 2 models: <code>paraphrase-multilingual-mpnet-base-v2</code> with long training (250 epochs) for all languages, and <code>all-distilroberta-v1</code> for English (trained only on the English subset) with a shorter training time. The full solution scheme is depicted below:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F578229%2Fac0b78b343b47fa5a7e9810393c7959c%2Flecr_v3.png?generation=1679045070712204&amp;alt=media" alt=""></p> <p><a href="https://www.sbert.net/docs/package_reference/losses.html#megabatchmarginloss" target="_blank">MegaBatchMarginLoss</a> was chosen as the loss function; batch sizes in the range 270-310 provided the best performance.</p> <p>Text input for topics was computed according to this formula:<br> <code>topic_text_input = language + channel + category + level + topic_title + topic_description + context(aka breadcrumbs) + parent_description + cousines_titles + children_titles</code>. </p> <p>The input for contents is much simpler:<br> <code>content_text_input = language + kind + content_title + content_description + content_text</code>.
</p> <p>I created a sketch to illustrate what the topic text input looks like:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F578229%2F8969c113c081ba5a996f36763831752a%2Flecr_tree_v11.png?generation=1679056974715866&amp;alt=media" alt=""></p> <p>During training I used <a href="https://pytorch.org/docs/stable/data.html#torch.utils.data.WeightedRandomSampler" target="_blank">different weights</a> for "source" and "non source" data (0.35 and 0.65 respectively) to sample "non source" instances more often: they are much harder for the model to predict correctly.</p> <p>The new <a href="https://arxiv.org/abs/2302.06675" target="_blank">Lion</a> optimizer led to very good optimization results out of the box, but carefully tuned good old <code>AdamW</code> won in terms of the final F2 score (though it required much longer training time).</p> <p>Training for 250 epochs was extremely long: ~40 hours for 1 fold. I even bought an RTX 4090 to be able to compute everything before the deadline but didn't manage to: for the 250-epoch model I computed only 3 folds (out of 5) plus the all-data model.</p> <p><strong>Stage 2</strong></p> <p>The second stage involved looking up the top 100 nearest contents for every topic in the corresponding language (for example, for a topic in French we search only French contents). After retrieving the list of possible candidates I generated ~30 features based on distance, language, text similarity (<a href="https://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler_distance" target="_blank">Jaro score</a> between titles, number of shared words), "geometry" of the distance space, etc.</p> <p>Here is a list of the best features, with explanations (a small pandas/LightGBM sketch using a few of them is given at the end of this post):</p> <ol> <li>rank – number of neighbor (or ranked distance for topic).</li> <li>distance.</li> <li>dist_std – Variance of distances to contents (for particular topic).</li> <li>dist_min – Minimum of distances to contents (for particular topic).</li> <li>dist_range – <code>dist_max – dist_min</code>.</li> <li>dist_jump – argmax of distance differences for topic: where (on what neighbor number) the biggest “jump” in distances occurred.</li> <li>dist_apart – <code>rank – dist_jump</code>.</li> <li>dist_max_change – maximum change in distances to contents (for particular topic).</li> <li>margin_forward – <code>distance(topic, nearest_content[i+1]) - distance(topic, nearest_content[i])</code>.</li> <li>margin_backward – <code>distance(topic, nearest_content[i]) - distance(topic, nearest_content[i-1])</code>.</li> <li>dist_mm – MinMaxScaled distances for topic.</li> <li>margin_backward_mm – analogue of margin_backward but for dist_mm.</li> <li>topic_cumsum_dist – cumsum of distances for given topic: <code>candidates.groupby('topic_id')['dist'].cumsum()</code>.</li> <li>topic_language.</li> <li>topic_level.</li> <li>topic_len – length of topic’s title.</li> <li>jaro – <a href="https://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler_distance" target="_blank">Jaro similarity score</a> between topic's title and current content's title.</li> <li>topic_max_jaro – maximum Jaro feature for topic.</li> <li>topic_median_jaro – median Jaro feature for topic.</li> <li>nshared_words – actually length of longest common substring between topic's title and content's title.</li> <li>nshared_words_lower – same as nshared_words but in lowercase scenario.</li> <li>is_max_nshared – does that particular pair (topic, content) have maximal nshared_words feature for that particular topic.</li> <li>jaro_forward – <code>jaro(nearest_content[i],
nearest_content[i+1])</code>.</li> <li>content_desc_isnull – if content’s description is null.</li> <li>content_text_isnull – if content’s text is null.</li> <li>content_min_dist – minimum distance for content: <code>candidates.groupby('content_id')['dist'].min()</code>.</li> <li>content_max_dist – maximum distance for content.</li> <li>content_diff_dist – <code>dist – content_min_dist</code>.</li> </ol> <p>LightGBM in binary classification mode was used as the Stage2 model (we assign 0/1 labels for candidates based on the correlation file). I used only non-source data to train and evaluate GBM.</p> <p><strong>Prediction</strong></p> <p>For final prediction I normalized predicted probabilities using MinMaxScaling for every topic, something that looks like:<br> <code>prediction.groupby('topic_id')['proba'].apply(minmaxscale)</code></p> <p>This approach allowed to search for the optimal threshold that doesn't depend on particular topic id.</p> <p>Final solution is <em>majority voting</em> of models learned on full train set + fold models (5 models in total): content is considered relevant if it appears in recommendation of (at least) 3 out of 5 models. </p> <p>Thank you!</p> <p>1) Using stage1 model for every topic I pick 100 nearest neighbors (from contents). I label (topic, content) pairs according to <code>correlations.csv</code> file. I had ~3-4% of True Positives in my top-100 candidates. So I had 96-97 zeroes and 3-4 ones per topic on average. I haven't done anything to tackle imbalance, only tuned <code>scale_pos_weight</code> parameter a little bit.</p> <p>2) Using TFiDF features is probably a good idea. I haven't used embeddings or TFiDF features. My main feature block was distance-based features and aggregations based on distance, topics and contents.<br> For example, cumsum of distances for given topic: <code>candidates.groupby('topic_id')['dist'].cumsum()</code>.<br> I will update my solution soon and add explicit list of features and explanations.</p> <p>3) Fold #0: 0.67 F2 (without GBM) —&gt; 0.687 F2 (with LGB).</p> <p><a href="https://www.kaggle.com/trushk" target="_blank">@trushk</a> I hope I answered your questions, feel free to ask again, if I didn't.</p>
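<p>As mentioned next to the feature list above, here is a small sketch of how a few of the distance-based candidate features and the per-topic MinMax scaling could be assembled with pandas and LightGBM. The candidates table (topic_id, content_id, dist, label), the file path, and the hyperparameters are hypothetical stand-ins for the real stage-1 output, and only a handful of the ~30 features are shown.</p> <pre><code>import numpy as np
import pandas as pd
import lightgbm as lgb

# Hypothetical candidate table: one row per (topic, content) pair from stage 1.
# Columns: topic_id, content_id, dist (cosine distance), label (0/1 from correlations).
cands = pd.read_parquet("candidates.parquet")  # placeholder path

cands = cands.sort_values(["topic_id", "dist"])
grp = cands.groupby("topic_id")["dist"]
cands["rank"] = grp.cumcount()                       # neighbor number
cands["dist_min"] = grp.transform("min")
cands["dist_std"] = grp.transform("std")
cands["dist_range"] = grp.transform("max") - cands["dist_min"]
cands["margin_backward"] = grp.diff()                # dist[i] - dist[i-1]
cands["margin_forward"] = -grp.diff(-1)              # dist[i+1] - dist[i]
cands["dist_mm"] = grp.transform(lambda d: (d - d.min()) / (d.max() - d.min() + 1e-9))
cands["topic_cumsum_dist"] = grp.cumsum()

features = ["rank", "dist", "dist_min", "dist_std", "dist_range",
            "margin_backward", "margin_forward", "dist_mm", "topic_cumsum_dist"]

model = lgb.LGBMClassifier(objective="binary", n_estimators=1000, learning_rate=0.05)
model.fit(cands[features].fillna(0), cands["label"])

# Per-topic MinMax scaling of predicted probabilities, as in the write-up,
# so that a single global threshold can be applied.
cands["proba"] = model.predict_proba(cands[features].fillna(0))[:, 1]
cands["proba_mm"] = cands.groupby("topic_id")["proba"].transform(
    lambda p: (p - p.min()) / (p.max() - p.min() + 1e-9))
submission = cands[cands["proba_mm"] &gt;= 0.5]          # threshold found on validation
</code></pre>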
Learning Equality - Curriculum Recommendations
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks. I added this information.</p> <p>Thank you very much. I added the "CV strategy" section.</p> <h1>Summary</h1> <ul> <li>Retriever and reranker</li> <li>Transformer-based model for retriever</li> <li>GBDT-based model for reranker</li> <li>Inference code <a href="https://www.kaggle.com/code/iiyamaiiyama/llecr-ens8910-125128-149154-155157" target="_blank">https://www.kaggle.com/code/iiyamaiiyama/llecr-ens8910-125128-149154-155157</a></li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2304617%2Fa101ba6640b7753f57c8d362e540c16a%2Flecr-reranker.png?generation=1678838816684059&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2304617%2Fe6c650f31f8520c19cb6e34b825d6ded%2Flecr-reranker2.png?generation=1678838830446492&amp;alt=media" alt=""></p> <h1>CV strategy</h1> <p>Simple 5fold random split with topic-id.<br> For stage 1 I used all data, including "source".<br> For stage 2, I used the same fold split as for stage 1, and created several models with and without "source" to ensure diversity.<br> CV and LB were well correlated.</p> <p>I trained a model based on channel GroupKFold and selected it for one of my final submissions, but a simple random kfold gave slightly better results for CV and LB.</p> <h1>Retriever(stage1)</h1> <h2>Text</h2> <p>The training text was created as follows:</p> <ul> <li>Topics <br> Recursively traversed to the root node and added titles. Finally, the topic description was added. For example, (root title + parent1 title + … + topic title + topic description). </li> <li>Contents <br> The title and description were concatenated. The "text" column was discarded.</li> </ul> <h2>Model</h2> <p>Each topic and its correlated contents were grouped together as one class. <br> The model was trained with ArcFace. Each model produces 768-dimensional embeddings. <br> Trained for 30 or 60 epochs, which took about 5 hours per fold.
<br> The margin was gradually increased from 0.2 to 0.6 during training.<br> The following models were used for the final submission.</p> <ul> <li><a href="https://huggingface.co/sentence-transformers/LaBSE" target="_blank">sentence-transformers/LaBSE</a></li> <li><a href="https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2" target="_blank">sentence-transformers/paraphrase-multilingual-mpnet-base-v2</a></li> <li><a href="https://huggingface.co/facebook/xlm-v-base" target="_blank">facebook/xlm-v-base</a></li> <li><a href="https://huggingface.co/xlm-roberta-base" target="_blank">xlm-roberta-base</a></li> </ul> <h2>KNN</h2> <p>For each topic, find 50 nearest neighbor contents and these pairs are passed to the reranker(stage2).</p> <h1>Reranker(stage2)</h1> <p>When I submitted stage1 model only, I coudn't reach 0.6 on the public LB. I think this is because stage1 model doesn't have any topic-tree structure information. So I added many tree-based features for stage2 model.</p> <h2>Model</h2> <p>CatBoost and XGBoost were used for reranker.<br> Features were created for each pair, then GBDT model predicts probability that the pairs were correlated or not.</p> <h2>Example of features</h2> <ul> <li>Embeddings distance</li> <li>TF-IDF distance</li> <li>Whether the topic's siblings correlated the target content</li> <li>How many times the content was correlated in the channel</li> </ul> <h2>Ensemble</h2> <ul> <li>stage1 <br> Concatenate embeddings then find KNN</li> <li>stage2 <br> Average GDBT predictions, then select pairs as positive predictions above the threshold.</li> </ul> <h1>Post-processing</h1> <ul> <li>After stage2, if there were no predictions for a topic, the content with the highest predicted score was added.</li> <li>If two contents that were always correrated together in "correlations.csv", and one of them appeared in the prediciton, add the other one.</li> <li>Search best threshold for each channel<br> For channels not included in the training data, a fixed threshold was used. (based on CV across all channels)</li> </ul> <h1>Not worked</h1> <ul> <li>"text" information of contents <br> I cannot find good way to use "text" column.</li> <li>Transformer-based reranker <br> They were very prone to overfitting. </li> </ul> <h1>Appendix: my models</h1> <table> <thead> <tr> <th>Model</th> <th>max positive score@50</th> <th>CV F2@stage1</th> <th>CV F2@stage2</th> <th>public LB@stage2</th> </tr> </thead> <tbody> <tr> <td>sentence-transformers/LaBSE</td> <td>0.8887</td> <td>0.5462</td> <td>0.6727</td> <td>0.676</td> </tr> <tr> <td>sentence-transformers/paraphrase-multilingual-mpnet-base-v2</td> <td>0.8891</td> <td>0.5429</td> <td>0.6698</td> <td>0.678</td> </tr> <tr> <td>facebook/xlm-v-base</td> <td>0.8869</td> <td>0.5320</td> <td>0.6690</td> <td>0.671</td> </tr> <tr> <td>xlm-roberta-base</td> <td>0.8832</td> <td>0.5388</td> <td>0.6666</td> <td>0.676</td> </tr> <tr> <td>Naive ensemble above four</td> <td>0.9336</td> <td>-</td> <td>0.6916</td> <td>(I didn't sub this)</td> </tr> <tr> <td>My published submission</td> <td>-</td> <td>-</td> <td>0.7152</td> <td>0.707</td> </tr> </tbody> </table> <p>Yes. 
In that case, these topics and contents are treated as two separate classes:<br> {T1, C5, C6}<br> {T2, C5, C6, C7}</p> <p>Thank you.</p> <p>In my pipeline, the label (class) used in ArcFace training is similar to a topic-id-based label, but not exactly the same.</p> <p>If a topic has four correlated contents, they all have the same label.<br> This means the label has five (1+4) samples.</p> <p>However, if two topics share exactly the same contents, they share the same label.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2304617%2F032294a808b943ef1f9272085397f1fe%2Flecr-retriever.png?generation=1679367081684407&amp;alt=media" alt=""></p>
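<p>To make the ArcFace training described above more concrete, here is a minimal PyTorch sketch of an additive-angular-margin head on top of the 768-dimensional embeddings, where each class groups a topic with its correlated contents as explained in the reply above. The class count and fixed margin are placeholders; the write-up's schedule of gradually raising the margin from 0.2 to 0.6 would simply update self.m between epochs.</p> <pre><code>import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginHead(nn.Module):
    """Additive angular margin (ArcFace) classification head."""

    def __init__(self, embedding_dim=768, n_classes=10000, scale=30.0, margin=0.2):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(n_classes, embedding_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s = scale
        self.m = margin          # can be increased epoch by epoch (0.2 up to 0.6)

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalised embeddings and class centres.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        sine = torch.sqrt((1.0 - cosine.pow(2)).clamp(1e-9, 1.0))
        # cos(theta + m), applied to the target class only.
        phi = cosine * math.cos(self.m) - sine * math.sin(self.m)
        one_hot = F.one_hot(labels, num_classes=cosine.size(1)).to(cosine.dtype)
        logits = torch.where(one_hot.bool(), phi, cosine) * self.s
        return F.cross_entropy(logits, labels)

# Usage sketch: embeddings come from the transformer encoder; each label groups
# a topic together with its correlated contents, as described above.
# loss = head(encoder_outputs, class_labels); loss.backward()
</code></pre>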
Learning Equality - Curriculum Recommendations
73rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for organizing this competition.<br> It is not very high in the rankings, but I hope to be able to use it to compare with the top tier.<br> <a href="https://github.com/YujiK-github/kaggle_LECR" target="_blank">code</a></p> <h1>Solution Overview</h1> <h2>CV setting</h2> <p>GroupKFold(n_splits=3).split(groups=df["channel"]) with category non-source topic. <br> If multiple folds are used, there will be multiple retrieval results (1st_stage_model) for inference, and I could not think of a good way to integrate them, so I used only one fold.</p> <h2>1st stage model(Retriever)</h2> <p>Extract candidates from a large amount of content based on the cosine similarity of sentence embedding.</p> <ul> <li>DataLoader: <a href="https://github.com/UKPLab/sentence-transformers/blob/master/sentence_transformers/datasets/NoDuplicatesDataLoader.py" target="_blank">NoDuplicatesDataLoader</a></li> <li>Loss: <a href="https://github.com/UKPLab/sentence-transformers/blob/master/sentence_transformers/losses/MultipleNegativesSymmetricRankingLoss.py" target="_blank">MultipleNegativesSymmetricRankingLoss</a></li> <li>Evaluator: <a href="https://github.com/UKPLab/sentence-transformers/blob/master/sentence_transformers/evaluation/InformationRetrievalEvaluator.py" target="_blank">InformationRetrievalEvaluator</a><ul> <li>class inheritance(score: MAP@k -&gt; recall@k)</li></ul></li> <li>input sentence<ul> <li>content: title + [SEP] + description</li> <li>topic: title + [SEP] + description + [SEP] + context</li></ul></li> <li>A100, 40GB(colab pro+ premium GPU)<ul> <li>batch_size is the most important parameter when using MNSLoss</li></ul></li> </ul> <table> <thead> <tr> <th>filename</th> <th>model</th> <th>batch_size</th> <th>max_len</th> <th>Recall@50(fold0)</th> <th>Memory(/40GB)</th> </tr> </thead> <tbody> <tr> <td>exp004</td> <td>all-MiniLM-L6-v2</td> <td>768</td> <td>128</td> <td>0.8216</td> <td>34GB</td> </tr> <tr> <td>exp006</td> <td>all-mpnet-base-v2</td> <td>256</td> <td>128</td> <td>0.8502</td> <td>37.3GB</td> </tr> <tr> <td>exp007</td> <td>xlm-roberta-base</td> <td>224</td> <td>128</td> <td>0.8421</td> <td>36.4GB</td> </tr> <tr> <td>exp008</td> <td>paraphrase-multilingual-mpnet-base-v2</td> <td>224</td> <td>128</td> <td>0.8207</td> <td>36.4GB</td> </tr> </tbody> </table> <h2>2nd stage model(Re-ranker)</h2> <p>Determine whether pairs of extracted candidates are correctly paired.</p> <ul> <li>Using all-mpnet-base-v2 top_k:50 pairs and all positive examples</li> <li>input sentence <ul> <li>content: title + [SEP] + description</li> <li>topic: title + [SEP] + description + [SEP] + context</li> <li>Using tokenizer's text_pair allows both texts to be entered in a balanced manner.</li></ul></li> <li>Search for thresholds &amp; top_k.<ul> <li>I want to assign at least one content to each topic, so I assign k content of pairs of topics that were not assigned and have a large probability (which I also search for)</li></ul></li> </ul> <table> <thead> <tr> <th>filename</th> <th>model</th> <th>batch_size</th> <th>max_len</th> <th>f2_score<br>(fold0)</th> <th>add topk<br>(fold0)</th> <th>thres<br>(fold0)</th> 
<th>best_epoch</th> </tr> </thead> <tbody> <tr> <td>exp004</td> <td>all-MiniLM-L6-v2</td> <td>96</td> <td>256</td> <td>0.5004</td> <td>12</td> <td>0.051</td> <td>4</td> </tr> <tr> <td>exp006</td> <td>all-mpnet-base-v2</td> <td>32</td> <td>256</td> <td>0.5591</td> <td>14</td> <td>0.001</td> <td>4</td> </tr> <tr> <td>exp007</td> <td>xlm-roberta-base</td> <td>32</td> <td>256</td> <td>0.5630</td> <td>18</td> <td>0.00018</td> <td>4</td> </tr> <tr> <td>exp008</td> <td>paraphrase-multilingual-mpnet-base-v2</td> <td>32</td> <td>256</td> <td>0.5456</td> <td>10</td> <td>0.00049</td> <td>4</td> </tr> </tbody> </table> <h2>3rd stage model(Weighted average)</h2> <p>Ensemble to improve score.</p> <ul> <li>Find the weights that minimize BCEWithLogitsLoss using optuna</li> <li>Search for thresholds &amp; top_k.<ul> <li>I want to assign at least one content to each topic, so I assign k content of pairs of topics that were not assigned and have a large probability (which I also search for)</li></ul></li> </ul> <h2>3rd stage model(stacking lgb)</h2> <p>Stacking using LightGBM.</p> <ul> <li>metric: cross_entropy</li> <li>useful features: Cosine Similarity of two sentences(from 1st_stage_model), logits(from 2nd_stage_model)</li> <li>Search for thresholds &amp; top_k.<ul> <li>I want to assign at least one content to each topic, so I assign k content of pairs of topics that were not assigned and have a large probability (which I also search for)</li></ul></li> </ul> <h1>Published notebook</h1> <p><a href="https://www.kaggle.com/code/yujikomi/lecr-preprocessing-considering-language" target="_blank">https://www.kaggle.com/code/yujikomi/lecr-preprocessing-considering-language</a> <br> This notebook is based on the idea that the language and content of the topic is almost the same. I have shown that recall can be improved by using this.</p> <h1>CV vs LB vs PB</h1> <table> <thead> <tr> <th>file</th> <th>CV</th> <th>LB</th> <th>PB</th> </tr> </thead> <tbody> <tr> <td>submission/weighted_average_ver1</td> <td>0.5937</td> <td>0.556</td> <td>0.585</td> </tr> <tr> <td>submission/weighted_average_ver2</td> <td>0.5955</td> <td>0.557</td> <td>0.585</td> </tr> <tr> <td>submission/stacking_lgb</td> <td>0.6025</td> <td>0.567</td> <td>0.595</td> </tr> <tr> <td>exp004</td> <td>0.5004</td> <td>0.466</td> <td>0.487</td> </tr> <tr> <td>exp006</td> <td>0.5591</td> <td>0.511</td> <td>0.54</td> </tr> <tr> <td>exp007</td> <td>0.5630</td> <td>0.530</td> <td>0.561</td> </tr> <tr> <td>exp008</td> <td>0.5456</td> <td>0.516</td> <td>0.547</td> </tr> </tbody> </table> <h1>Execution environments</h1> <p>kaggle notebook, colab pro+</p> <h1>Ranking</h1> <ul> <li>LB: 75th</li> <li>PB: 73rd</li> </ul>
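<p>The threshold and top_k search used in the 2nd and 3rd stage models above can be sketched roughly as follows: keep pairs above a probability threshold, backfill the top-k candidates for any topic that would otherwise end up with no content, and grid-search both values against the F2 score. The DataFrame columns, the search ranges, and the f2_score helper are hypothetical placeholders.</p> <pre><code>import numpy as np
import pandas as pd

def select_pairs(cands, thres, add_topk):
    """Keep pairs at or above thres; topics left empty get their top add_topk pairs."""
    keep = cands[cands["prob"] &gt;= thres]
    missing = set(cands["topic_id"]) - set(keep["topic_id"])
    backfill = (cands[cands["topic_id"].isin(missing)]
                .sort_values("prob", ascending=False)
                .groupby("topic_id")
                .head(add_topk))
    return pd.concat([keep, backfill], ignore_index=True)

def search_threshold(cands, labels):
    best = (0.0, 1, -1.0)                      # (thres, add_topk, f2)
    for thres in np.arange(0.0001, 0.1, 0.0005):
        for add_topk in range(1, 21):
            picked = select_pairs(cands, thres, add_topk)
            score = f2_score(picked, labels)   # hypothetical metric helper
            if score &gt; best[2]:
                best = (thres, add_topk, score)
    return best
</code></pre>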
Learning Equality - Curriculum Recommendations
9th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks to Kaggle and the hosts for organizing such an amazing competition towards a novel cause! Special thanks to <a href="https://www.kaggle.com/jamiealexandre" target="_blank">@jamiealexandre</a> for creating and maintaining a welcoming and energetic atmosphere. Congratulations to all the winners and thank you for sharing your valuable insights. Kaggle is an awesome community!</p> <p>Congratulations to our teammate <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> for becoming a Kaggle Competitions Grandmaster! Very well deserved 🎉🎉💯</p> <h2>Summary</h2> <p>Our solution follows the classic stage 1 Retriever + stage 2 Re-Ranker pipeline, while using transformer based models for each stage. The final score of a (topic, content) pair is computed by blending retrievers similarity scores and re-ranker model probabilities. We then used a constant threshold to filter out candidates and performed minor post processing.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2125251%2F785e3131684a77547f1a7702851df007%2Flect_pipe_final.png?generation=1678980522046179&amp;alt=media" alt=""></p> <h2>Stage 1: Retrievers</h2> <p>We used the standard dual-encoder setup for training of retrievers. We paid close attention to retrieval focused pre-training of backbones such at they are properly warmed up for retrieval task in the domain of interest. Specifically, we used 3 different pre-training approaches</p> <h5>Inverse Cloze Task (ICT): basically a retrieval task where we defined the positive pairs to be:</h5> <ul> <li>content title, content description</li> <li>content title, content text chunks (generate many examples by splitting content texts at fixed intervals)</li> <li>topic title, topic description</li> </ul> <h5>Condenser pre-training of xlm-roberta-base</h5> <ul> <li>A specific pre-training approach designed to modify LM’s internal attention structure such that they produce better dense vectors for retrieval</li> <li>Reference: Condenser: a Pre-training Architecture for Dense Retrieval (<a href="https://arxiv.org/abs/2104.08253" target="_blank">https://arxiv.org/abs/2104.08253</a>) </li> </ul> <h5>RetroMAE pre-training of sentence-transformers/paraphrase-multilingual-mpnet-base-v2</h5> <ul> <li>Reference: RetroMAE: Pre-Training Retrieval-oriented Language Models Via<br> Masked Auto-Encoder (<a href="https://arxiv.org/pdf/2205.12035.pdf" target="_blank">https://arxiv.org/pdf/2205.12035.pdf</a>)</li> </ul> <p>Our final solution included an ensemble of 8 retrievers, trained using Multiple Negatives Ranking Loss, using following backbones:</p> <ul> <li>sentence-transformers/paraphrase-multilingual-mpnet-base-v2 + ICT</li> <li>xlm-roberta-base + condenser</li> <li>sentence-transformers/paraphrase-multilingual-mpnet-base-v2 + RetroMAE</li> <li>intfloat/simlm-base-msmarco-finetuned</li> <li>xlm-roberta-large + ICT</li> </ul> <p>The models were trained using in-batch negatives. 
We implemented a custom sampler to construct batches in the following 3 ways - </p> <ul> <li>Language wise batching: all topics in a batch are from the same language</li> <li>Channel wise batching: all topics in a batch are from the same channel</li> <li>Random batch</li> </ul> <p>During training we used roughly 80% language wise batching + 15% channel wise batching + 5% random batching.</p> <p>We also implemented a two-pass approach for one of our retrievers</p> <ul> <li>Pass 1: Training with MNRL loss</li> <li>Pass 2: Select top 20 negative samples for each topic and continue training using Contrastive loss for additional epochs. This boosted recall while not affecting F2 too much.</li> </ul> <p>For topic / content representations we tried many different combinations, mainly similar to my post from the initial phase of the competition: <a href="https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/376873" target="_blank">https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/376873</a> . We are not so sure if the topic encoder captured graph hierarchy properly. The information from ancestors were definitely helpful, but most likely it didn’t capture graph structural connotation of siblings, cousins and children.</p> <h2>Stage 2: Re-rankers</h2> <p>In order to prepare positive and negative examples for re-rankers we used</p> <ul> <li>Top 25 retrieved contents for each topic (out-of-sample)</li> <li>All positive contents from correlations.csv</li> </ul> <p>We adopted 2 different approached for re-ranker: </p> <ul> <li>Pointwise approach: only classify one (topic, content) pair at a time</li> <li>Listwise approach: classify all contents for a topic at once</li> </ul> <h4>Pointwise Approach</h4> <p>Even though it’s a standard binary classification task, the training of re-rankers with pointwise approach has been tricky with issues such as model de-generation / loss explosion etc. To overcome this, we </p> <ul> <li>enriched content representation by adding correlated topic breadcrumbs from source category</li> <li>enriched topic representation by adding breadcrumbs of very similar topics (&gt; 0.95 cosine similarity / BM25 search)</li> <li>more dense supervision using auxiliary loss, as explained below</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2125251%2F382b3d2a9e344ef5342f4de2a3353e8d%2Faux_loss.png?generation=1678980650337009&amp;alt=media" alt=""></p> <p>The auxiliary labels are computed on ancestors up to a certain depth (hyper parameter set at 5). If a content is linked to any descendant topic of an ancestor then the corresponding auxiliary label is 1. One example:</p> <pre><code>[CLS] [T_START] 9.4 Leaf electroscope; Leaf electroscope: features, charging and discharging [T_END] [T_START] 9.0 Electrostatics 1 (12 lessons); [T_END] [T_START] Physics; [T_END] [T_START] Form 1; [T_END] [T_START] Kolibri Library for Kenya; [T_END] [SEP] en; html5; Pinhole Magnifier; Who needs expensive optical equipment to see better?; Info = Pinhole Magnifier Who needs expensive optical equipment to see better? A pinhole in a card can act like a magnifying glass, helping your eye focus on an object that is very close to you. However, by limiting the amount of light that reaches your eye from the object, the pinhole also makes the object appear dimmer. 
Subjects: Keywords: Biology Anatomy &amp;amp; Physiology Perception Light[SEP] </code></pre> <p>Here the topic is <code>9.4 Leaf electroscope</code> and content is <code>en; html5; Pinhole Magnifier; Who needs expensive optical equipment to see better?; Info = Pinhole Magnifier Who needs expensive optical equipment to see better? ...</code>. They are not directly correlated, hence main label is 0. Aux label for parent <code>9.0 Electrostatics 1 (12 lessons)</code> is also 0 since none of the descendant topics of parent (siblings) is attached to the content. However, the grand parent of this topic <code>physics</code> has aux label of 1 since there exist at least one descendent topic of <code>physics</code> that is correlated with the content.</p> <p>This way training allowed dense supervision as compared to vanilla methods. Intuitively, this approach should help a model to better capture the domain of a topic, level of proficiency required and similar other dimensions.</p> <h4>Listwise Approach</h4> <p>In this approach, the candidate contents are classified together. The candidates are sorted as per their stage one cosine similarity scores. Hence this is some sort of meta model - using signals from stage 1 retrievers. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2125251%2F18ef68f311630526ea2943f4f476b355%2Flistwise.png?generation=1678980928711984&amp;alt=media" alt=""></p> <p>We used the following backbones for re-ranker:</p> <ul> <li>microsoft/mdeberta-v3-base</li> <li>cross-encoder/mmarco-mMiniLMv2-L12-H384-v1</li> <li>MoritzLaurer/multilingual-MiniLMv2-L12-mnli-xnli</li> </ul> <p>Additionally, we tried a QA based re-ranker approach as mentioned here: Reranking as QA (<a href="https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/395029" target="_blank">https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/395029</a>) It gave good boost to our local validation score, but we couldn’t incorporate the model properly in our solution as it was trained on very last minutes…</p> <h2>Blending + Post Processing</h2> <p>We blended both stage 1 and stage 2 scores to compute the final score of a (topic, content) pair. The final list of contents for a topic was obtained by filtering the candidates using a constant threshold (irrespective of channel / language / new / old contents), which was tuned with a small subset of data (1/4 folds). This was definitely not optimal. After reading top place solutions, we realized, this step didn’t allow our pipeline to reach its full potential. 
In fact, our only gold place submission (completing 15 mins before deadline) happens to be the one where we increased the threshold slightly.</p> <p>We did the following post-processing:</p> <ul> <li>Handling null predictions: in case no candidate content was left after applying the threshold, we picked top 5 retriever candidates</li> <li>For each new content in the hidden set,<ul> <li>Find the top matched unseen topic</li> <li>If the above topic has same language and has cosine similarity above 0.5, then add in the (topic, content) pair.</li></ul></li> </ul> <h2>What Didn't Work</h2> <ul> <li>Second round training of retrievers with hard negatives / de-noised hard negatives</li> <li>We couldn’t make ArcFace loss work, we will revisit this, surely it’s a must try for metric learning tasks</li> <li>LGB / GNN based post-processing gave only minor boost, couldn’t focus on them more as we were pursuing many other ideas</li> <li>Query expansion</li> </ul> <h2>Team Members</h2> <p>Many thanks to my teammates (@trushk , <a href="https://www.kaggle.com/harshit92" target="_blank">@harshit92</a>, <a href="https://www.kaggle.com/syhens" target="_blank">@syhens</a>, <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> ) for such an amazing collaboration and perfect teamwork! Thank you JarvisLabs (<a href="https://jarvislabs.ai/" target="_blank">https://jarvislabs.ai/</a>) for empowering us with GPU hours!</p> <p>Trushant Kalyanpur <a href="https://www.kaggle.com/trushk" target="_blank">@trushk</a><br> Harshit Mehta <a href="https://www.kaggle.com/harshit92" target="_blank">@harshit92</a><br> Yao He <a href="https://www.kaggle.com/syhens" target="_blank">@syhens</a><br> Darek Kłeczek <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> <br> Raja Biswas <a href="https://www.kaggle.com/conjuring92" target="_blank">@conjuring92</a> </p>
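<p>A small sketch of the language-wise batching mentioned in the retriever section above: a PyTorch batch sampler that only groups together topics sharing one language, so in-batch negatives stay hard. Mixing in the channel-wise and fully random batches described in the write-up would be a straightforward extension; the languages list is a hypothetical input aligned with the dataset indices.</p> <pre><code>import random
from collections import defaultdict
from torch.utils.data import Sampler

class LanguageBatchSampler(Sampler):
    """Yields batches of dataset indices whose topics all share one language."""

    def __init__(self, languages, batch_size):
        self.batch_size = batch_size
        self.by_lang = defaultdict(list)
        for idx, lang in enumerate(languages):   # languages[i] = language of sample i
            self.by_lang[lang].append(idx)

    def __iter__(self):
        batches = []
        for idxs in self.by_lang.values():
            random.shuffle(idxs)
            for i in range(0, len(idxs), self.batch_size):
                batches.append(idxs[i:i + self.batch_size])
        random.shuffle(batches)                  # mix languages across the epoch
        yield from batches

    def __len__(self):
        return sum((len(v) + self.batch_size - 1) // self.batch_size
                   for v in self.by_lang.values())

# Usage sketch (dataset and collate_fn assumed):
# loader = DataLoader(dataset, batch_sampler=LanguageBatchSampler(langs, 64),
#                     collate_fn=collate_fn)
</code></pre>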
Liberty Mutual Group - Fire Peril Loss Cost
After shock, Let's talk about solutions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Liberty Mutual Group - Fire Peril Loss Cost <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Another question I have is how to take variance into consideration when selecting final submissions.</p> <p>Congratulations to the winners and to everyone who leapt up the leaderboard.</p> <p>We will talk about our approaches shortly, after some cleaning.</p> <p>However, the particular approach I'm interested in is how to wisely decide 'this is it' and stop working :P</p> <p>Our final submission is a blend of three different models: extreme gradient boosting (xgb), a neural network (nnet), and linear models (BayesRidge, Larse, Ridge), which gave 0.307 on the private LB. It turns out that the nnet and linear models overfit seriously. Public/private scores: nnet 0.38/0.26, linear 0.38/0.28. Actually, we didn't blend the best xgb we had, which scored 0.38/0.31; see the attached file for details.</p> <p>Finally, after reviewing most of our submissions, these features seem to be good for both the public and private data sets.</p> <p>var: 4,7,8,10,12~17</p> <p>geo: 31,32</p> <p>weather: 103,153.</p> <p>[quote=barisumog;53012]</p> <p>There's one little trick I used, which I guess others have also done. Instead of predicting the losses directly, I took the logarithm, and predicted on that.</p> <p>[/quote]</p> <p>We also saw that the logarithm helps, but I don't really understand why.</p>
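<p>On that last point, a common reason the log transform helps is that fire-loss targets are extremely heavy-tailed (mostly zeros plus a few very large losses), so squared-error models chase the outliers; training on log1p-compressed targets and inverting at prediction time is the usual trick. A minimal sketch, assuming a generic scikit-learn regressor and already prepared arrays X and y:</p> <pre><code>import numpy as np
from sklearn.linear_model import Ridge

# y is the raw loss target: mostly zeros, a few very large values.
y_log = np.log1p(y)                 # compress the heavy tail; log1p handles y == 0

model = Ridge(alpha=1.0)
model.fit(X, y_log)                 # X, y are assumed feature/target arrays

pred = np.expm1(model.predict(X))   # invert the transform back to the loss scale
pred = np.clip(pred, 0, None)       # losses cannot be negative
</code></pre>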