Columns: Title of Competition (string, 168 distinct values) · Title of Writeup (string, length 9–139) · User (string, 1 distinct value) · Writeup (string, length 419–61.5k)
BirdCLEF 2023
3rd place solution: SED with attention on Mel frequency bands
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to the organizers Sohier Dane, Stefan Kahl, Tom Denton, Holger Klinck and all involved institutions (Kaggle, Chemnitz University of Technology, Google Research, K. Lisa Yang Center for Conservation Bioacoustics at the Cornell Lab of Ornithology, LifeCLEF, NATURAL STATE, OekoFor GbR and Xeno-canto).</p> <p>This year’s competition was a welcome change compared to previous challenges. The cMAP evaluation metric eliminated the need for threshold tuning, while the inference time limit encouraged a focus on efficient models with a good balance between accuracy and speed.</p> <p>In this post I want to briefly introduce some aspects of my solution. A more detailed description will be provided later as an update or in the upcoming working note.</p> <h3>Quick summary</h3> <ul> <li>Modified SED architecture with attention on frequency bands</li> <li>Addressing domain shift with reverb augmentation </li> <li>Using freezed TorchScipt models and precalculated inputs to speed up inference</li> <li>Addressing fluctuating inference time by setting a timer in inference notebook</li> </ul> <h3>Datasets</h3> <ul> <li>2021/2023 competition data</li> <li><a href="https://www.kaggle.com/c/birdsong-recognition/discussion/159970" target="_blank">2020 extended xeno-canto data</a></li> <li><a href="https://www.kaggle.com/datasets/mariotsaberlin/xeno-canto-extended-metadata-for-birdclef2023" target="_blank">2023 extended xeno-canto data (all files with 2023 species as primary label)</a></li> <li>BirdCLEF 2019 soundscapes (2021 species only &amp; nocall/noise)</li> <li><a href="https://dcase.community/challenge2018/task-bird-audio-detection" target="_blank">DCASE 2018 Bird Audio Detection Task (nocall/noise)</a></li> <li><a href="https://www.kaggle.com/datasets/theoviel/bird-backgrounds" target="_blank">Some nocall/noise files from datasets of previous competitions/solutions</a></li> </ul> <h3>Data preparation</h3> <ul> <li>Convert files to 32 kHz (if necessary)</li> <li>Convert extended (downloaded) xeno-canto files to FLAC</li> <li>Add duration information for each file to dataset metadata </li> <li>Add first 10 seconds interval of all xeno-canto files to training set</li> <li>Split training set into 8 folds (but mostly only 3 folds were used)</li> </ul> <h3>Model input</h3> <ul> <li>Log Mel spectrogram of 5 second audio chunks (n_fft= 2048, hop_length=512, n_mels=128, fmin=40, fmax=15000, power=2.0, top_db=100)</li> <li>Normalized to 0…255</li> <li>Converted to 3 channel RGB image </li> </ul> <h3>Model backbone/encoder architectures (from <a href="https://github.com/huggingface/pytorch-image-models" target="_blank">timm</a>)</h3> <ul> <li>tf_efficientnet_b0_ns</li> <li>tf_efficientnetv2_s_in21k</li> </ul> <p>I also tried resnet50, resnet152, tf_efficientnet_b2_ns, tf_efficientnet_b3_ns, tf_efficientnet_b4_ns, efficientformer_l3, tf_efficientnetv2_m_in21k, densenet121 and eca_nfnet_l0 but none of those were included in inference ensemble because in my case tradeoff between performance and inference time was not as good as for EffNetB0 or EffNetV2s.</p> <p>All models used pretrained ImageNet weights and served as feature 
extractor combined with a custom classification head. As classifier I used a modified SED head with attention on frequency bands instead of time frames. The intuition behind this is, that species in soundscapes often occupy different frequency bands. In original SED architecture, feature maps representing frequency bands are aggregated via mean pooling and attention is applied on features representing time frames. If attention is instead applied on frequency bands it can help to distinguish species vocalizing at the same time but with different pitch. The modification can be achieved simply by rotating the Mel spectrogram by 90 degrees before feeding it to the original SED network.</p> <h3>Data augmentation (esp. to deal with weak/noisy labels and domain shift between train/test set)</h3> <ul> <li>Select 5s audio chunk at random position within file:<ul> <li>Without any weighting</li> <li>Weighted by signal energy (RMS)</li> <li>Weighted by primary class probability (using info from pseudo labeling)</li></ul></li> <li>Add hard/soft pseudo labels of up to 8 bird species ranked by probability in selected chunk</li> <li>Random cyclic shift</li> <li>Filter with random transfer function</li> <li>Mixup in time domain via adding chunks of same species, random species and nocall/noise</li> <li>Random gain of signal amplitude of chunks before mix</li> <li>Random gain of mix</li> <li>Pitch shift and time stretch (local &amp; global in time and frequency domain)</li> <li>Gaussian/pink/brown noise</li> <li>Short noise bursts </li> <li>Reverb (see below)</li> <li>Different interpolation filters for spectrogram resizing </li> <li>Color jitter (brightness, contrast, saturation, hue)</li> </ul> <p>In soundscapes, birds are often recorded from far away, resulting in weaker sounds with more reverb and attenuated high frequencies (compared to most Xeno-canto files where sounds are usually much cleaner because the microphone is targeted directly at the bird). To account for this difference between training and test data (domain shift), I added reverb to the training files using impulse responses, recorded from the Valhalla Vintage Verb audio effect plugin. During training, I randomly selected impulse responses and convolved them with the audio signal with a 20% chance, using a dry/wet mix control ranging from 0.2 (almost dry signal) to 1.0 (only reverb).</p> <p>I didn’t use pretraining followed by finetuning, instead I trained on all 2021 &amp; 2023 species + nocall (659 classes). Background species were included with target value 0.3. For inference, predictions were filtered to the 2023 species (264 classes).</p> <h3>Speed up inference and deal with submission time limit</h3> <p>Due to variations in hardware and CPUs used to run inference notebooks, the number of models that could be ensembled varied. To prevent submission timeouts, I set a timer in the notebook to ensure completion within the 2-hour limit. If the timer reached approximately 118 minutes, inferencing was stopped and results were collected for models and file parts predicted up to that point. Results for unfinished models/file parts were masked before averaging predictions. Using this method, I couldn’t determine the exact number of models that could be ensembled. In early submissions, I could only ensemble 3 models without risking timeouts. Later, I prioritized inference speed over model diversity by using models with the same input (no variation in FFT size, number of Mel bands etc.). 
Now I could precalculate and save Mel spectrogram images to RAM for all test files in advance and reuse those for all models. I also converted models to TorchScript. With these optimizations, I could ensemble at least 7 models, depending on architecture (e.g. 4x EfficientNetB0 + 3x EfficientNetV2s) without setting a timer.</p> <p>My best single model used an EfficientNetV2s and scored 0.83386 on public leaderboard (0.74104 on private LB). The best single model with highest score on private leaderboard used a ResNet50 backbone (0.7482 private LB / 0.83288 public LB). My best ensemble on private LB (0.76365) was a mix of 8 models (5x EfficientNetB0 + 3x EfficientNetV2s) with simple mean averaging of single model predictions.</p> <h3>Some things I tried but gave up on because I couldn’t get them to work well enough</h3> <ul> <li><a href="https://arxiv.org/abs/2203.05482" target="_blank">Model soup</a></li> <li>MultiLabelSoftMarginLoss (instead of BCEWithLogitsLoss)</li> <li>Knowledge Distillation</li> <li>Finetuning using only 2023 species data</li> <li>Converting models to ONNX or openvino format (speed up was only achieved for small batch sizes)</li> <li>Any postprocessing (e.g. amplify probabilities of detected species in neighboring windows or entire file)</li> </ul> <h3>Citations</h3> <ul> <li><a href="https://arxiv.org/abs/1912.10211" target="_blank">Kong, Qiuqiang, Yin Cao, Turab Iqbal, Yuxuan Wang, Wenwu Wang, and Mark D. Plumbley. "PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition." arXiv preprint arXiv:1912.10211 (2019).</a></li> <li><a href="https://github.com/qiuqiangkong/audioset_tagging_cnn/" target="_blank">Code for PANNs paper</a></li> <li><a href="https://dcase.community/documents/workshop2019/proceedings/DCASE2019Workshop_Adavanne_45.pdf" target="_blank">S. Adavanne, H. Fayek &amp; V. Tourbabin, "Sound Event Classification and Detection with Weakly Labeled Data", Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019), pages 15–19, New York University, NY, USA, Oct. 2019</a></li> <li><a href="https://www.kaggle.com/code/hidehisaarai1213/introduction-to-sound-event-detection/notebook" target="_blank">Introduction to Sound Event Detection by Hidehisa Arai</a></li> <li><a href="http://ceur-ws.org/Vol-2380/paper_86.pdf" target="_blank">Lasseck M (2019) Bird Species Identification in Soundscapes. In: CEUR Workshop Proceedings.</a></li> <li><a href="https://xeno-canto.org/" target="_blank">https://xeno-canto.org/</a></li> <li><a href="https://github.com/huggingface/pytorch-image-models" target="_blank">timm (PyTorch Image Models)</a></li> <li><a href="https://github.com/iver56/audiomentations" target="_blank">Audiomentations</a></li> </ul> <p><strong>Inference Notbook:</strong> <a href="https://www.kaggle.com/mariotsaberlin/bc23-3rd-place-solution-refactored" target="_blank">https://www.kaggle.com/mariotsaberlin/bc23-3rd-place-solution-refactored</a></p>
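<p>A minimal PyTorch sketch of the frequency-band attention idea described above: the Mel spectrogram is rotated by 90 degrees before the backbone, so the axis the usual SED attention head pools over is frequency instead of time. The head layout and sizes here are illustrative assumptions, not the author's exact code.</p> <pre><code>import timm
import torch
import torch.nn as nn

class FreqAttentionSED(nn.Module):
    def __init__(self, backbone="tf_efficientnet_b0_ns", num_classes=659):
        super().__init__()
        # backbone as a feature extractor (no pooling, no classifier)
        self.encoder = timm.create_model(
            backbone, pretrained=False, in_chans=3, num_classes=0, global_pool=""
        )
        feat = self.encoder.num_features
        self.att = nn.Conv1d(feat, num_classes, kernel_size=1)
        self.cla = nn.Conv1d(feat, num_classes, kernel_size=1)

    def forward(self, mel):                   # mel: (batch, 3, n_mels, time)
        x = mel.transpose(2, 3)               # "rotate" so frequency becomes the last axis
        fmap = self.encoder(x)                # (batch, C, time_reduced, mel_reduced)
        fmap = fmap.mean(dim=2)               # mean-pool the (former) time axis
        att = torch.softmax(torch.tanh(self.att(fmap)), dim=-1)   # attention per frequency band
        cla = torch.sigmoid(self.cla(fmap))                       # per-band class probabilities
        return (att * cla).sum(dim=-1)        # clip-wise prediction: (batch, num_classes)

print(FreqAttentionSED()(torch.randn(2, 3, 128, 313)).shape)      # torch.Size([2, 659])
</code></pre>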
AMP®-Parkinson's Disease Progression Prediction
2nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: AMP®-Parkinson's Disease Progression Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sorry for the late sharing. First of all, thanks to the competition organizers for hosting this interesting competition, and to my great teammates (@hyd, @rib). And congrats to my friend rib on becoming a GM.</p> <p>Our solution is simple: we found information related to "visit_month" very useful and information related to proteins useless. So we focused on the samples, structure and models.</p> <h3>Samples</h3> <p>We used samples with visit_month in [0, 6, 12, 18, 24, 36, 48, 60, 72, 84] for training, and found that our CV correlated much better with the LB.</p> <h3>Features</h3> <ul> <li>All about “visit_month” (a small sketch follows at the end of this write-up)<br> 1) use visit_month as a meta feature<br> 2) gap between visit_month and the last visit_month<br> 3) number of visits for each patient</li> <li>Little about “protein”<br> use each patient’s NPX ratios instead of the original values<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1069377%2F39cefc5f531ed2c7b4d105fe209e44e6%2F1.png?generation=1687139866930379&amp;alt=media" alt=""></li> </ul> <h3>Structure</h3> <ol> <li>16 labels<br> Predict each “updrs” for predicted_month_diff=0,6,12,24<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1069377%2Fb41bb7ba7b53f7f259961020b11df086%2F2.png?generation=1687139918111830&amp;alt=media" alt=""></li> <li>4 labels<br> Use predicted_month_diff as a feature, and predict each one’s 4 updrs<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1069377%2Fb277c3ac795f150c6f4ae892dc4e485e%2F3.png?generation=1687139946997350&amp;alt=media" alt=""></li> </ol> <h3>Models</h3> <p>Simple MLPs with different structures and parameters. Finally we blended:</p> <ol> <li>Models trained with different numbers of labels</li> <li>Models trained with different network structures</li> <li>Models trained with different network parameters</li> </ol> <p>code: <a href="url" target="_blank">https://www.kaggle.com/code/dc5e964768ef56302a32/2nd-solution</a></p>
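<p>A minimal pandas sketch of the "all about visit_month" features (the sketch referenced above). Column names follow the competition's train_clinical_data.csv; the exact implementation is an assumption, not the team's code.</p> <pre><code>import pandas as pd

clin = pd.read_csv("train_clinical_data.csv").sort_values(["patient_id", "visit_month"])

# keep only the visit months the team trained on
train_months = [0, 6, 12, 18, 24, 36, 48, 60, 72, 84]
feats = clin[clin["visit_month"].isin(train_months)].copy()

# 1) visit_month itself is used as a meta feature (already a column)
# 2) gap between this visit_month and the patient's previous visit_month
feats["gap_from_last_visit"] = feats.groupby("patient_id")["visit_month"].diff().fillna(0)
# 3) number of visits so far for each patient
feats["visit_count"] = feats.groupby("patient_id").cumcount() + 1
</code></pre>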
BirdCLEF 2023
6th place solution: BirdNET embedding + CNN
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the host and Kaggle for organizing the competition.</p> <p>My solution is a combination of the embedding vectors of <a href="https://github.com/kahst/BirdNET-Analyzer/tree/d1f5a9c015d4419277cbb285e89d3f843a6bab49" target="_blank">BirdNET-Analyzer V2.2</a> and the CNN from the <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">BirdCLEF 2021 2nd place solution</a>.</p> <h2>Model (BirdNET embedding + CNN)</h2> <p>In order to utilize the features of BirdNET, the embedding of BirdNET is concatenated to the output of the CNN of the <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">BirdCLEF 2021 2nd place solution</a>.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1694617%2Fe7dcc6f87186d43df680592acb3f91d7%2Fmy_solution.png?generation=1684975539169704&amp;alt=media" alt=""></p> <h3>BirdNET embedding</h3> <p>The embedding vectors of BirdNET were created with <a href="https://github.com/kahst/BirdNET-Analyzer/blob/d1f5a9c015d4419277cbb285e89d3f843a6bab49/embeddings.py" target="_blank">BirdNET V2.2 embeddings.py</a>. I modified <a href="https://github.com/kahst/BirdNET-Analyzer/blob/d1f5a9c015d4419277cbb285e89d3f843a6bab49/audio.py#L7" target="_blank">BirdNET V2.2 audio.py</a> so that if the length of the audio data is shorter than BirdNET's sample rate (48000), the data is padded to output at least 1 second of embedding vector. Using V2.2 rather than the latest version BirdNET V2.3, slightly improved cv.</p> <h3>CNN</h3> <p>The backbone of the CNN from the BirdCLEF 2021 2nd place solution used timm's <code>eca_nfnet_l1</code> and <code>seresnext26t_32x4d</code>.</p> <h2>Training</h2> <p>After pretraining with data from BirdCLEF 2021 + BirdCLEF 2022, I train the CNN and other linear layers with data from BirdCLEF 2023. BirdNET is not trained.</p> <p>The input for training is data for 30 seconds. Since BirdNET outputs embedding vectors for 3 seconds of data, I averaged each of the embedding vectors for 30 seconds.</p> <p>In most experiments, cv was highest in the final epoch, so I included all data in the training set.</p> <p>The main training parameters are as follows:</p> <ul> <li><p>loss: BCEWithLogitsLoss</p></li> <li><p>MelSpectrogram</p> <ul> <li>sample_rate: 32000, window_size: 1024, hop_size: 320, fmin: 0, fmax: 14000, mel_bins: 128, power: 2, top_db=None</li></ul></li> <li><p>labels: primary label=0.9995, secondary label=0.4 or 0.5 (<a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/327193" target="_blank">BirdCLEF 2022 3rd place solution</a>), label smoothing=0.01</p></li> <li><p>cv: primary_label StratifiedKFold(n_splits=5)</p></li> <li><p>optimizer: AdamW(weight_decay=1e-4)</p></li> <li><p>scheduler: warmup 0-3epoch(lr=3e-6-&gt;3e-4) + Cosine Annealing 3-70epoch(lr=3e-4-&gt;3e-6)</p></li> </ul> <h1>Augmentation</h1> <p>As in the previous two competitions, augmentation was important. 
I combined the following augmentations:</p> <ul> <li>audiomentations.Shift</li> <li>Mixup (<a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">BirdCLEF 2021 2nd place solution</a>)</li> <li>Gaussian Noise</li> <li>random lowpass filter</li> <li>random power (<a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243351" target="_blank">BirdCLEF 2021 5th place solution</a>)</li> <li>torchaudio.transforms.FrequencyMasking</li> <li>torchaudio.transforms.TimeMasking</li> </ul> <h1>Oversampling</h1> <p>The following Oversampling was performed, but I think the effect was minimal.</p> <ul> <li><p>Oversampling to have at least 20 of data for each class</p></li> <li><p>I doubled the number of training data that have two or more secondary labels because data with more than two secondary_labels have worse val_loss</p></li> <li><p>As in the <a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/327044" target="_blank">BirdCLEF 2022 5th place solution</a>, files with reverb/echo effects created for some minority class data and added to the training data</p></li> <li><p>Strong augmentation was applied to the oversampled data</p></li> </ul> <h1>Inference</h1> <p>The CNN input is given 5 seconds of data for inference, and the BirdNET input is given the first 3 seconds of data for inference.</p> <ul> <li><p>To submit within 2 hours, I converted each model to ONNX</p></li> <li><p>As in <a href="https://www.kaggle.com/code/leonshangguan/faster-eb0-sed-model-inference" target="_blank">this notebook</a>, I also used ThreadPoolExecutor to speed up inference</p></li> </ul> <h1>Ensembling</h1> <p>The outputs of 2 models with different CNN backbones (<code>eca_nfnet_l1</code>, <code>seresnext26t_32x4d</code>) simply averaged.</p> <h1>What did not work</h1> <ul> <li>add background noise (<a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">BirdCLEF 2021 2nd place solution</a>)</li> <li>postprocess (<a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/326950" target="_blank">BirdCLEF 2022 2nd place solution</a>)</li> <li>Speeding up inference using OpenVINO<ul> <li>Even if the model was converted to OpenVINO, the inference time was not much different from ONNX. I think I just used it wrong.</li></ul></li> <li>Using MFCC as input (as in <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9495150/" target="_blank">this paper</a>)</li> <li>Using CQT as input</li> <li>Using <code>FAMILY</code> etc. from <code>eBird_Taxonomy_v2021.csv</code> for prediction (as in <a href="https://arxiv.org/pdf/2110.03209.pdf" target="_blank">this paper</a>)</li> </ul> <h1>Rough lb history</h1> <table> <thead> <tr> <th>name</th> <th>Private Score</th> <th>Public Score</th> </tr> </thead> <tbody> <tr> <td>BirdCLEF 2021 2nd place solution CNN (with BirdCLEF 2021 + BirdCLEF 2022 pretrain)</td> <td>0.71</td> <td>0.80</td> </tr> <tr> <td>BirdCLEF 2021 2nd place solution CNN + Augmentation + BirdNET embedding</td> <td>0.74</td> <td>0.82</td> </tr> <tr> <td>BirdCLEF 2021 2nd place solution CNN + Augmentation + BirdNET embedding + Oversampling + Ensembling</td> <td>0.75</td> <td>0.83</td> </tr> </tbody> </table>
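<p>A small PyTorch sketch of the fusion step described above: the frozen BirdNET embeddings (one per 3-second window, averaged over the 30-second training clip) are concatenated with the CNN features before the classifier. The dimensions are placeholders that depend on the BirdNET version and CNN backbone actually used.</p> <pre><code>import torch
import torch.nn as nn

class BirdNetFusionHead(nn.Module):
    def __init__(self, cnn_dim=2048, birdnet_dim=320, num_classes=264):
        super().__init__()
        self.fc = nn.Linear(cnn_dim + birdnet_dim, num_classes)

    def forward(self, cnn_feat, birdnet_windows):
        # birdnet_windows: (batch, n_windows, birdnet_dim), one embedding per 3 s window
        birdnet_feat = birdnet_windows.mean(dim=1)        # average over the 30 s clip
        return self.fc(torch.cat([cnn_feat, birdnet_feat], dim=1))

head = BirdNetFusionHead()
logits = head(torch.randn(4, 2048), torch.randn(4, 10, 320))   # logits: (4, 264)
</code></pre>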
BirdCLEF 2023
2nd place solution: SED + CNN with 7 models ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners! Thanks to Kaggle and Cornell Lab of Ornithology for hosting this interesting competition.</p> <p>This is my first solo gold medal and I am glad to have this result.</p> <p>This competition shared a lot of similarity to the past BirdClef competitions(2020/2021/2022). Thus I spent a lot of time gathering the solution shared by the top teams in the past competitions. Special thanks to all of you for sharing such important information!</p> <p>Let me briefly introduce my solution. I will update the solution for more details in a couple of days.</p> <h1>Most important (7 models ensemble!)</h1> <p>Please see the notebook below.<br> <a href="https://www.kaggle.com/code/honglihang/openvino-is-all-you-need" target="_blank">openvino is all you need!!</a></p> <h1>Training data</h1> <p>Here is my training data.</p> <ul> <li>2023/2022/2021/2020 competition data</li> <li><a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/398318" target="_blank">2020 additional competition data</a></li> <li>additional training data from xeno-canto, including 2023 comp species in both foreground and background(records with 2023 comp species only in background which is less than 60 seconds are included). </li> </ul> <p>I intended to collect more records from ebird site, but I realized that ebird data is not public and cannot be used. I <a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/393023" target="_blank">asked the host</a> and confirmed that. Thanks <a href="https://www.kaggle.com/tomdenton" target="_blank">@tomdenton</a> answering my questions.</p> <p>Thus, my training pipeline does not contain records from ebird site.</p> <h1>Model Architecture</h1> <p>First, I used SED architecture. The same as yours.</p> <p>backbones are:</p> <ul> <li>tf_efficientnetv2_s_in21k</li> <li>seresnext26t_32x4d</li> <li>tf_efficientnet_b3_ns</li> </ul> <p>All of them are trained on 10sec clip.</p> <p>Second, I used CNN proposed by <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">2nd place of 2021 competition</a></p> <p>backbones are:</p> <ul> <li>tf_efficientnetv2_s_in21k</li> <li>resnet34d</li> <li>tf_efficientnet_b3_ns</li> <li>tf_efficientnet_b0_ns</li> </ul> <p>All except b0 are trained on 15sec clip. b0 is trained on 20sec clip.</p> <h1>Pseudo Labeling and Hand Labeling</h1> <p>I have used SED model to generate pseudo label and extracted the potential nocall using quantile threshold. Then, I hand labeled the potential nocall by hearing the record. I hand labeled about 1800 records but did not see improvement. Maybe the pseudo label contains more FP rather than FN. 
I did not have time to further investigate the predictions.</p> <h1>Model Training</h1> <p>augmentations:</p> <ul> <li>GaussianNoise</li> <li>PinkNoise</li> <li>Gain</li> <li>NoiseInjection</li> <li>Background Noise (nocall in the 2020, 2021 comps + rainforest + environment sound + nocall in freefield1010, warblrb, birdvox)</li> <li>PitchShift</li> <li>TimeShift</li> <li>FrequencyMasking</li> <li>TimeMasking</li> <li>OR Mixup on waveforms</li> <li>Mixup on spectrograms.</li> <li><a href="https://www.kaggle.com/competitions/birdsong-recognition/discussion/183269" target="_blank">With a probability of 0.5 lowered the upper frequencies</a></li> <li>self mixup for records with 2023 species only in the background (60sec waveform -&gt; split into 6 * 10sec -&gt; np.sum(audios,axis=0) to get a 10sec clip)</li> </ul> <p>I used weights (computed from primary_label and secondary_labels) for the DataLoader in order to cope with the unbalanced dataset.</p> <h1>Training stages</h1> <p>For training I used 2 stages:</p> <ol> <li>Pretrain on all data (834 species).</li> <li>Finetune on 2023 species (264 species).</li> </ol> <p>In both stages, I first train the model with CrossEntropyLoss, and then train with BCEWithLogitsLoss(reduction='sum'). The model converges faster with CrossEntropyLoss than BCEWithLogitsLoss, but BCEWithLogitsLoss gives a better score.</p> <p>To give more diversity, models are trained on different windows and different mixup rates, and some of them are trained only with CrossEntropyLoss. Also, 3 of the models are finetuned on 30s clips.</p> <h1>CV strategy</h1> <ul> <li>For each validation sample - slice the first 60 seconds into pieces -&gt; predict each piece -&gt; max(sample_predictions, dim=pieces).</li> </ul> <p>CV does not show correlation with the LB, but it seems that the right ways to improve the LB are those which do not significantly decrease CV. So I monitored the CV when tuning the pipeline.</p> <h1>Inference</h1> <p>For the SED models, feed the model a 10 sec chunk BUT apply the head only on the centered 5 sec of the reduced CNN image and use max(framewise, dim=time).</p> <p>Also, TTA (2s) is used for the SED models.</p> <p>Important: converting PyTorch models to OpenVINO significantly reduces inference time (by about 40%). (The eca_nfnet_l0 backbone ONNX model cannot be converted to OpenVINO because the StdConv layer in timm uses the train mode of F.batch_norm in its forward method.) That is the magic behind ensembling 7 models.</p> <h1>Ensemble</h1> <p>I spent quite a lot of time understanding the metric. The ensemble is as follows:</p> <ol> <li>(weighted average, 0.84 on LB, 0.76 on private) Apply a weighted average on the raw logits. This ensemble does not make sense to me because the output logits of different models differ in scale and should not simply be added, otherwise the result is biased. But considering that the absolute value of the logits may also contribute to the score, and it does give the best LB, I chose it as a final submission to have a gamble.</li> <li>(rank average, 0.83 on LB, 0.75 on private) Convert the logits to ranks and apply a weighted average on the ranks (a small sketch follows at the end of this write-up). I think this is the more reasonable way to ensemble, considering that I don't have a reliable CV. <strong>Be careful that in this comp the submission is padded with 5 rows of 1, thus the ranking should start from 0 to prevent the largest rank from becoming 1 after converting rankings to percentile form. (For example, 0, 0.333, 0.666 is good while 0.333, 0.666, 1.0 is bad; 
otherwise the score will decrease due to the potential false positive of those ranked 1.0) </strong></li> </ol> <p>Basically all of my single model's performance reaches about 0.81-0.82 on LB, so the weight of each model is similar to give diversity although higher public lb requires large weight of sed v2s and sed b3_ns.</p> <p>Looking at the private LB, weighted average seems to be better. emmm……Why?</p> <h1>What did not work</h1> <ul> <li><a href="https://www.kaggle.com/competitions/birdsong-recognition/discussion/183269" target="_blank">contrast</a></li> <li>EMA</li> <li>BirdNet</li> <li>adjusting logit according to previous and next 5s clip</li> <li>q transform</li> </ul> <p>Inference Notebook: <a href="https://www.kaggle.com/code/honglihang/2nd-place-solution-inference-kernel" target="_blank">https://www.kaggle.com/code/honglihang/2nd-place-solution-inference-kernel</a><br> github: <a href="https://github.com/LIHANG-HONG/birdclef2023-2nd-place-solution" target="_blank">https://github.com/LIHANG-HONG/birdclef2023-2nd-place-solution</a></p>
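<p>A small numpy sketch of the rank-average ensemble (option 2 above), including the caveat that ranks must start at 0 so no prediction becomes exactly 1.0 after converting to percentile form. The weighting scheme is illustrative.</p> <pre><code>import numpy as np
from scipy.stats import rankdata

def rank_average(model_outputs, weights=None):
    weights = weights if weights is not None else [1.0] * len(model_outputs)
    ranked = []
    for out in model_outputs:                 # out: (n_rows, n_classes) raw logits
        r = rankdata(out, axis=0) - 1         # ranks 0 .. n_rows-1 within each class column
        ranked.append(r / out.shape[0])       # e.g. 0, 0.333, 0.666 for 3 rows
    return np.average(np.stack(ranked), axis=0, weights=weights)

ensembled = rank_average([np.random.randn(3, 264), np.random.randn(3, 264)])
</code></pre>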
Google - Isolated Sign Language Recognition
Solution - Single transformer without val dataset
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organisers of the PopSign Games and all of the Kaggle community for running such a great competition.</p> <p>More importantly, my heartfelt thanks go to both <a href="https://www.kaggle.com/markwijkhuizen" target="_blank">@markwijkhuizen</a> and <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>. MARK contributed a very strong transformer framework, which is the machine learning pipeline for me in this competition. And I will find daily inspiration in <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>'s discussion post. </p> <p>As a learner, this competition has taken my machine learning skills a step further. Therefore, I would like to <strong>record the results of my experiment here</strong>.</p> <h1>Model Architecture</h1> <p>Shallow networks are the best choice.</p> <ul> <li>Single shallow transformer base (Refer to <a href="https://www.kaggle.com/code/markwijkhuizen/gislr-tf-data-processing-transformer-training" target="_blank">version 5</a>)<ul> <li>Added <code>label smoothing</code>.</li> <li>Used warm up to <code>lr</code>.</li> <li>Remove all LayerNorm.</li></ul></li> </ul> <p>The main changes I have made that have led to improvements:</p> <ul> <li>Increase <code>Embedding</code> to 600.</li> <li>Add a <code>Gaussian noise (0.01)</code> to the FC layer of the Embedding layer.</li> <li>Increase <code>epoch</code> to 230</li> <li>Deepens the depth of the last FC layer in the transformer layer.</li> <li>Late Dropout. 0---0.3---0.1</li> </ul> <h1>Feature Engineering</h1> <p>Feature engineering was the most difficult part of my entire experiment. I came up with a lot of ideas, but most of them ended up being submission errors.</p> <p>Here is my summary of options that can bring about an uplift and are feasible.</p> <ul> <li><p>Use of <code>lips</code>, <code>eyes</code> (regardless of left or right eye), <code>main hand</code> (21 landmarks), and <code>main pose</code> (6 landmarks)</p> <ul> <li>Main hand means normalize the right hand to the left hand. The formula is <code>x = 1 - x</code>.</li> <li>The formula of main pose is <code>x = 1.2 -x</code>. </li></ul></li> <li><p>Set the input frame count <code>threshold</code> to <strong>48</strong>.</p> <ul> <li>Less than that is filled with 0.</li> <li>More than that is compressed using the <code>Bilinear Interpolation</code> method. (I think it only work well to data where the frame rate is a multiple of 48)</li> <li>Removal of data with only <code>one frame</code>.</li></ul></li> <li><p>Add <code>motion feature</code> for coordinate X.</p></li> <li><p>Add <code>Hand distance feature</code>. (Get inspired by <a href="https://www.kaggle.com/josephzahar" target="_blank">@josephzahar</a>'s <a href="https://www.kaggle.com/code/josephzahar/distance-angle-based-features-using-keras" target="_blank">post</a>)</p> <ul> <li>Calculate the Euclidean distance between the discontinuous joints of the hand.</li></ul></li> </ul> <p>Hence, the dimensionality of each data in my pipeline is 3 dimensions <code>(x, y, motion)</code>. For the Hand, it is 213<code>(x, y, motion, distance)</code>. 
</p> <h1>Augmentation</h1> <p>There is no need for mirror(flip) augmentation in this pipeline, as the hand is already normalised in the pre-processing.</p> <p>In augmentation, I only found rotation to be little useful for me. Specifically, I rotated the data with a small sample size (<code>data[-42:]</code>) by 15 degrees up and down. But I think my approach is very poor.</p> <h1>Future Work</h1> <ul> <li>How to distinguish similar words, like awake and wake?</li> <li>Solve the problem of <code>keras_model_converter.target_spec.supported_types = [tf.float16]</code> not working for me.</li> <li>How to do rotation correctly?</li> </ul> <h1>Summary of Best Solutions (In processing)</h1> <h3>"How To Improve Best Public transformer basded Notebook" from <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a></h3> <ul> <li><p>Original Post</p> <ul> <li><a href="https://www.kaggle.com/competitions/asl-signs/discussion/406302" target="_blank">https://www.kaggle.com/competitions/asl-signs/discussion/406302</a></li></ul></li> <li><p>Corresponding Notebook</p> <ul> <li><a href="https://www.kaggle.com/code/cdeotte/improve-best-public-notebook-lb-0-76" target="_blank">https://www.kaggle.com/code/cdeotte/improve-best-public-notebook-lb-0-76</a></li></ul></li> <li><p>Keypoints: </p> <ul> <li>Reduce batch size and increse transformer block.</li> <li>Ensemble with different model seeds.</li></ul></li> <li><p>All changes</p> <ol> <li><p>Train 1 model =&gt; Train 4 models</p></li> <li><p>Add Time Scale augmentation</p></li> <li><p>Ensemble and apply TFLite FP16 quantization</p></li></ol> <p>Change the following parameters:</p> <ol> <li><p>INPUT_SIZE, 64 =&gt; 12</p></li> <li><p>BATCH_ALL_SIGNS_N, 4 =&gt; 1</p></li> <li><p>N_EPOCHS, 250 =&gt; 120</p></li> <li><p>LANDMARK_UNITS, 384 =&gt; 224</p></li> <li><p>UNITS, 512 =&gt; 376</p></li> <li><p>NUM_BLOCKS, 2 =&gt; 3</p></li> <li><p>MLP_RATIO, 4 =&gt; 3</p></li> <li><p>MLP_DROPOUT_RATIO, 0.40 =&gt; 0.30</p></li> <li><p>Remove random frame masking</p></li></ol></li> <li><p>The results of the experiments combined with those tricks</p> <ul> <li><p>Just use a-i parameters changes in my solution (Step i is necessary, otherwise it would be by gradient messages.). </p> <ul> <li>Public: 0.7525. </li> <li>Private: 0.8340</li></ul></li> <li><p>Ensemble 2 seed models without quantization. (Surprised by this decline, I am looking into the reasons for it.)</p> <ul> <li>Ensemble Public: 0.7443 <ul> <li>Model0 Public: 0.73</li> <li>Model1 Public: 0.7314</li></ul></li> <li>Ensemble Private: 0.8276<ul> <li>Model0 Private: 0.814</li> <li>Model1 Private: 0.816</li></ul></li> <li><strong>I have found the reason. I change <code>lr</code> to <code>0.4*lr</code>, cause <code>BATCH_ALL_SIGNS_N</code> reduces from 4 to 1. This action was wrong, because I use rotation augmentation before training. In other words, all changing variables can affect the choice of <code>lr</code>. If those variables stay same but <code>batch_size</code>, We need to make the change in <code>batch_size</code> proportional to the change in <code>lr</code>.</strong> (Answer from this <a href="https://www.kaggle.com/competitions/asl-signs/discussion/406302#2248253" target="_blank">post</a>)<br> After cancelling the change of <code>lr</code>. The result is shown below</li> <li>Ensemble Public: 0.7683 </li> <li>Ensemble Private: 0.8445</li></ul></li></ul></li> <li><p>To be resolved</p> <ul> <li><p>What is Time Scale augmentation? 
and How to remove random frame masking?</p></li> <li><p>Why the learning rate is proportional to the batch size?</p> <ul> <li>Answer: This is correct with the other model parameters held constant. However, this is wrong if other variables (epoch, dropout rate, etc.) change.</li></ul></li> <li><p>Where this from "Accurate local validation scheme which estimates LB score" ?</p></li></ul></li> </ul> <h1>Rule of thumbs</h1> <h2>How to choose learning rate</h2> <ul> <li><p><a href="https://www.kaggle.com/code/isbhargav/guide-to-pytorch-learning-rate-scheduling/notebook" target="_blank">https://www.kaggle.com/code/isbhargav/guide-to-pytorch-learning-rate-scheduling/notebook</a></p></li> <li><p>From this <a href="https://www.kaggle.com/competitions/asl-signs/discussion/406302#2248264" target="_blank">post</a></p></li> </ul> <blockquote> <p>If training requires only a few epochs (i.e. 1, 2, 3, 4, or 5 epochs) like NLP models, I usually just go with constant LR. Then i look at validation metric and if it increases in the last epoch or two, perhaps I will use <code>LR</code> for the first epochs and <code>LR * 0.1</code> (or 0.3) for the last epochs.</p> <p>When a model requires many epochs like 20, 40, 80, 160. I usually go with cosine schedule (like this comp). These two techniques work well in most cases.</p> </blockquote>
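<p>A numpy sketch of two preprocessing steps described above: mirroring the right hand onto the left (x becomes 1 - x) and resizing every sequence to 48 frames (zero-padding short clips, interpolating long ones, dropping single-frame samples). The original pipeline is TensorFlow; this is an illustrative approximation, not the author's code.</p> <pre><code>import numpy as np

def mirror_to_main_hand(xyz, is_right_hand):
    # normalize the right hand onto the left; the pose uses 1.2 - x instead
    if is_right_hand:
        xyz = xyz.copy()
        xyz[..., 0] = 1.0 - xyz[..., 0]
    return xyz

def resize_to_48_frames(seq, target_len=48):
    n, d = seq.shape
    if n == 1:
        return None                                   # single-frame samples are removed
    pad_len = max(target_len - n, 0)
    seq = np.pad(seq, ((0, pad_len), (0, 0)))         # zero-pad short sequences
    n = seq.shape[0]
    if n == target_len:
        return seq
    src = np.linspace(0, n - 1, target_len)           # compress long sequences by interpolation
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    w = (src - lo)[:, None]
    return seq[lo] * (1 - w) + seq[hi] * w
</code></pre>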
Parkinson's Freezing of Gait Prediction
3rd Place Solution: Transformer+GRU
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The first and second place solutions have much better fits to the individual defog/tdcsfog datasets; this solution fits one model to both.</p> <p>Architectures: Deberta/VisionTransformer/VisionTransformerRelPos -&gt; LSTM/GRU<br> (typically 2-4 layers, with a single-layer RNN)</p> <p>Patch sizes are 7-13, with sequences of 192-384 patches.</p> <p>All sequences have heavy augmentation--stretching, cropping, ablation, accumulated Gaussian noise, etc.</p> <p>--</p> <p>See data.py and model.py for details:</p> <p>Inference Code: <a href="https://www.kaggle.com/code/stochoshi/fork-of-walk3" target="_blank">https://www.kaggle.com/code/stochoshi/fork-of-walk3</a><br> Additional Code: <a href="https://www.kaggle.com/datasets/stochoshi/walkdata4" target="_blank">https://www.kaggle.com/datasets/stochoshi/walkdata4</a></p>
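<p>A rough PyTorch sketch of the patch-then-sequence-model idea above, with a generic transformer encoder standing in for Deberta/VisionTransformer. Patch length, width and depth are illustrative values within the ranges mentioned, not the author's settings.</p> <pre><code>import torch
import torch.nn as nn

class PatchTransformerGRU(nn.Module):
    def __init__(self, in_ch=3, patch_len=9, d_model=128, num_classes=3):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(in_ch * patch_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=3)
        self.gru = nn.GRU(d_model, d_model, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * d_model, num_classes)

    def forward(self, x):                               # x: (batch, time, 3) accelerometer
        b, t, c = x.shape
        t = (t // self.patch_len) * self.patch_len
        patches = x[:, :t].reshape(b, -1, self.patch_len * c)   # non-overlapping patches
        h = self.encoder(self.embed(patches))           # global context across patches
        h, _ = self.gru(h)                              # local smoothing with a single-layer RNN
        return self.head(h)                             # per-patch event logits

out = PatchTransformerGRU()(torch.randn(2, 192 * 9, 3))  # out shape: (2, 192, 3)
</code></pre>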
Stable Diffusion - Image to Prompts
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for organizers hosting a nice competition.</p> <p>My approach is based on CLIP model with directly predicting 384 embedding vector following <a href="https://www.kaggle.com/code/shoheiazuma/stable-diffusion-vit-baseline-train" target="_blank">this baseline notebook</a>.</p> <h2>Dataset</h2> <p>I used about ~400K data. The validation score of Diffusion DB was well correlated to LB score but still 0.025 ~ 0.03 gap.</p> <ul> <li><p><a href="https://vizwiz.org/tasks-and-dathasets/image-captioning/" target="_blank">Vizwiz image caption</a> ~70k</p> <ul> <li>This was key dataset for my experiment.</li> <li>Training with this data, local val vs LB score -&gt; 0.5415/0.5309, but without this 0.5528/0.48765</li> <li>I think this caption dataset is generally more descriptive, diverse and longer than COCO.</li> <li>Original training samples are 23K at this dataset, but each training sample has max 5 captions, so sampled 3 captions for each training sample.</li></ul></li> <li><p>Diffusion DB, 300k</p> <ul> <li>only using SD2 images, which I personally generated for improving local val/LB correlation</li> <li>210K samples from prompt filtering and another 80k images from hard sampling using a trained model.</li></ul></li> <li><p>COCO, 25k</p> <ul> <li>Under training with Vizwiz, this dataset contribution was relatively low.</li></ul></li> <li><p><a href="https://www.kaggle.com/datasets/motono0223/gustavosta-stable-diffusion-prompts-sd2-v2" target="_blank">Lexica.art</a>, 10k</p></li> </ul> <h2>Model</h2> <ul> <li>only using CLIP models<ul> <li>My baseline model was ViT Base 224(laion2B), almost all experiments were done on this model. Final score for this model was, local val/Public/Private=0.6402/0.61402/0.61357</li> <li>Single best model. ViT L 336(open ai), local val/Public/Private=0.6658/0.63557/0.63425<ul> <li>I could not find good training parameters for larger models, ViT-H and ConvNext xxLarge.</li></ul></li> <li>Best submission, ensemble of ViT-L(laion2b, openai, datacompxl), ViT-H, and ConvNext Large/xxLarge, local val/Public/Private=0.6791/0.64935/0.64814</li></ul></li> </ul> <h2>Training</h2> <h3>CLIP fine tuning</h3> <p>Fine tuning CLIP model needs extra care compared to imagenet weight, we have to keep original CLIP weight as possible as we can to get best performance.<br> I found the following two methods improve score by ~ +0.02 in total and used a lot of time to find best hyperparameters.</p> <ol> <li><p><a href="https://arxiv.org/pdf/2202.10054.pdf" target="_blank">LP-FT(Linear Probe and then Fine Tune)</a></p> <ul> <li>This was studied at <a href="https://arxiv.org/ftp/arxiv/papers/2210/2210.08473.pdf" target="_blank">the top-1 solution of previous competition</a> and also other teams at this competition.</li></ul></li> <li><p>Combination of EMA and layer-wise learning rate decay</p> <ul> <li>This has been studied at <a href="https://arxiv.org/abs/2212.06138" target="_blank">this CLIP fine tuning paper</a>.</li> <li>Best parameter for ViT-B was (EMA decay, layer-wise decay) = (0.9998, 0.4)</li> <li>But this did not work for ViT-L, layer-wise decay = 0.6 was best. 
It seemed the effective decay factor for the initial layers should be around 1e-6, so we have to choose a larger decay rate for larger models (a short sketch of layer-wise LR decay follows at the end of this write-up).</li></ul></li> </ol> <p>These two methods have not only a direct effect but also an indirect one: they let us increase the learning rate while keeping the original CLIP weights, which gives a better result.</p> <h3>Augmentation</h3> <ul> <li><p>Crop/RandomErase/RandAug (without posterize, solarize and equalize)</p> <ul> <li>This contributed ~ +0.007 to the local val score and enabled longer training.</li> <li>Strong image transformations did not fit this competition, so I tried to find weak ones.</li> <li>Horizontal Flip worsened performance</li></ul></li> <li><p>Same prompt and different seed</p> <ul> <li>10% of Diffusion DB and 20% of Vizwiz samples have 3 images which belong to the same prompt but different SD2 generation seeds. For each training epoch one of the 3 images is randomly chosen.</li> <li>This also contributed to the score and the validation loss curve.</li></ul></li> <li><p><a href="https://github.com/ShieldMnt/invisible-watermark" target="_blank">Invisible watermark</a> augmentation</p> <ul> <li>Adding a watermark to input images during training, using the following script from the <a href="https://github.com/Stability-AI/stablediffusion/blob/334969331438036c1b4fb529c262f78499870d86/scripts/txt2img.py#L363" target="_blank">Stability AI repository</a></li> <li>This did not contribute much to the score, only ~ +2.0e-5</li></ul></li> </ul> <h3>Code</h3> <p>This is <a href="https://www.kaggle.com/code/sai11fkaneko/3rd-place-solution/notebook" target="_blank">my submission notebook</a>.</p>
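<p>A short sketch of layer-wise learning-rate decay (the sketch referenced in the fine-tuning section above): earlier blocks get geometrically smaller learning rates than later ones, so the original CLIP weights near the input are preserved. Attribute names follow timm-style ViTs and are assumptions about the concrete model class used.</p> <pre><code>import torch

def layerwise_lr_groups(vit, base_lr=1e-4, decay=0.4):
    blocks = list(vit.blocks)
    n = len(blocks)
    # embeddings get the smallest learning rate
    groups = [{"params": list(vit.patch_embed.parameters()),
               "lr": base_lr * decay ** (n + 1)}]
    # block i gets base_lr * decay ** (n - i): deeper blocks stay close to base_lr
    for i, blk in enumerate(blocks):
        groups.append({"params": list(blk.parameters()),
                       "lr": base_lr * decay ** (n - i)})
    # the new head is trained at the full base learning rate
    groups.append({"params": list(vit.head.parameters()), "lr": base_lr})
    return groups

# optimizer = torch.optim.AdamW(layerwise_lr_groups(model), weight_decay=0.05)
</code></pre>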
Parkinson's Freezing of Gait Prediction
21st place solution: Conv1d with denoising
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My 21st place solution is based on a great public notebook <a href="https://www.kaggle.com/code/coderrkj/parkinson-fog-pred-conv1d-separate-tf-model" target="_blank">here</a>. The chosen one of my submissions is an ensemble of 5 similar models (<strong>public LB: 0.433, private LB: 0.324</strong>). My best private LB is a single model (<strong>public LB: 0.421, private LB: 0.327</strong>); I published it <a href="https://www.kaggle.com/code/takanashihumbert/gait-single-models-inference/notebook" target="_blank">here</a>.</p> <h2>model</h2> <p>The basic structure of my model is composed of 3 different <code>Conv1d</code> blocks, then <code>concatenate</code> and <code>flatten</code>, and finally a 4-label output (a rough sketch follows at the end of this write-up).<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3110858%2F0662912f80e49cb1bb8a7fb91a77364c%2FWX20230609-0907112x.png?generation=1686272859248587&amp;alt=media" alt=""><br> The features are <strong>Time_frac</strong>, <strong>AccV</strong>, <strong>AccML</strong>, <strong>AccAP</strong>, <strong>V_ML</strong>, <strong>V_AP</strong>, <strong>ML_AP</strong> (the last 3 are the pairwise differences between AccV, AccML, and AccAP). And I know <strong>Time_frac</strong> is a powerful (it improves the score by nearly 0.1) but controversial one. I use a multi-class setup, so the labels are <strong>StartHesitation</strong>, <strong>Turn</strong>, <strong>Walking</strong>, and <strong>Normal</strong>.</p> <h2>denoising</h2> <p>My denoising code is very simple:</p> <pre><code># note: the wavelet name, modes and zeroed levels are assumed;
# the original values did not survive the formatting of this post
import pywt

def wavelet_denoising_2(x, wavelet="db2"):
    coeffs = pywt.wavedec(x, wavelet, mode="per")
    coeffs[len(coeffs) - 1] *= 0      # zero out the finest detail coefficients
    coeffs[len(coeffs) - 2] *= 0
    y = pywt.waverec(coeffs, wavelet, mode="per")
    if len(x) % 2 == 1:               # waverec can return one extra sample
        y = y[:-1]
    return y
</code></pre> <h2>window size</h2> <pre><code># four window-size constants were defined here; their names and values
# did not survive the formatting of this post
</code></pre> <p>I think that's all, nothing special. Thanks. :)<br> Looking forward to the top and other interesting solutions!</p>
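<p>A rough PyTorch sketch of the three-branch Conv1d model described above (the original is a Keras/TensorFlow model; kernel sizes, branch widths and the window length are illustrative assumptions).</p> <pre><code>import torch
import torch.nn as nn

class MultiKernelConv1d(nn.Module):
    def __init__(self, in_ch=7, n_classes=4, window=256):
        super().__init__()
        # three Conv1d branches with different receptive fields
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv1d(in_ch, 32, k, padding=k // 2), nn.ReLU())
            for k in (3, 7, 15)
        ])
        self.head = nn.Linear(3 * 32 * window, n_classes)

    def forward(self, x):                       # x: (batch, 7 features, window)
        feats = [branch(x) for branch in self.branches]
        return self.head(torch.cat(feats, dim=1).flatten(1))    # concatenate, flatten, classify

# 7 features: Time_frac, AccV, AccML, AccAP, V_ML, V_AP, ML_AP
out = MultiKernelConv1d()(torch.randn(8, 7, 256))               # out: (8, 4) class logits
</code></pre>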
March Machine Learning Mania 2023
8th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly, I would like to thank Raddar for his awesome public codes, as well as to Kaggle for hosting this great competition and providing an awesome platform. I have also learned a lot from other fellow Kagglers, so thank you all!</p> <p><strong>Approach</strong><br> This was my third year participating in the March Madness competition. Based on my prior experience, I decided to take a conservative approach through an ensemble of various models.</p> <p><strong>Data Preparation</strong><br> For feature selection, I used the features from Raddar’s public notebook. I created two sets of features: one with Team Quality features and the other without. I removed Team Quality features for some models.</p> <p><strong>Modeling</strong><br> I used five different models: LGBM, Logistic Regression, CatBoost, XGB, and SVM. However, I dropped SVM as it was too slow to train. I combined the outputs of the remaining four models equally. While I initially used cross-validation to measure each model’s performance and used a weighted average of the output of each model in the prior year’s competition, I found that this approach did not work well. Therefore, I decided to use equal weighting to avoid overfitting this year.</p> <p><strong>Post-Processing</strong><br> In the past, overriding the whole seeds 1-4 and 13-16 did not work well for me. So, this year I only overrode seeds 1-2 and 14-15. Additionally, instead of giving 99% probability for seeds 1 and 2, I took a conservative approach by giving 95% and 90% probability, respectively.</p> <p>Overall, I believe that this approach with the tremendous blessings from God helped me win my first prize and gold medal in Kaggle after 5 years!</p> <p>Code link: <a href="url" target="_blank">https://www.kaggle.com/code/pwh70411/8th-place-solution</a></p>
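<p>A small numpy sketch of the ensembling and seed-based post-processing described above. Only the equal weighting and the 95%/90% values come from the write-up; the exact matchup condition for the overrides is an assumption.</p> <pre><code>import numpy as np

def blend_and_override(model_preds, seed_a, seed_b):
    # model_preds: list of arrays of P(team A beats team B), one per model
    # (LGBM, Logistic Regression, CatBoost, XGB); seed_a/seed_b: tournament seeds
    p = np.mean(model_preds, axis=0)            # equal-weight ensemble to avoid overfitting
    p = np.where(np.logical_and(seed_a == 1, np.isin(seed_b, [15, 16])), 0.95, p)
    p = np.where(np.logical_and(seed_b == 1, np.isin(seed_a, [15, 16])), 0.05, p)
    p = np.where(np.logical_and(seed_a == 2, np.isin(seed_b, [14, 15])), 0.90, p)
    p = np.where(np.logical_and(seed_b == 2, np.isin(seed_a, [14, 15])), 0.10, p)
    return p
</code></pre>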
Parkinson's Freezing of Gait Prediction
10th place solution: U-Net with squeeze & excitation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to the winners! Thanks to the competition organizers for putting together an interesting challenge! Here's my solution.</p> <h1>Model architecture</h1> <p>I used a 1D convolutional U-Net with squeeze-and-excitation and 5 encoder/decoder pairs.</p> <p>Squeeze-and-excitation seemed to be very beneficial, presumably because it allows the model to take global context into consideration while classifying each sample. I processed the data in extremely long context windows (10240 samples).</p> <h1>Features</h1> <ul> <li><strong>Raw acleration values:</strong> AccV, AccML, AccAP<ul> <li>I did not normalize these in any way. </li></ul></li> <li><strong>Time features:</strong> <br> <code> df['NormalizedTime'] = df['Time'] / df['Time'].max() </code><br> <code> df['SinNormalizedTime'] = np.sin(df['NormalizedTime'] * np.pi) </code></li> </ul> <p>I also experimented with adding a variety of frequency domain features that were calculated using wavelet transforms but that didn't help.</p> <h1>Training data augmentation</h1> <ul> <li><strong>Random low pass filtering:</strong><ul> <li>Frequency cutoff was 5% - 37.5% the sample rate</li> <li>Applied to half the training sequences</li></ul></li> <li><strong>Random time warp:</strong><ul> <li>Used linear interpolation to change the sequence length by +/- 10% (or any value in between; the scale was sampled from a uniform distribution)</li> <li>Applied to half the training sequences</li></ul></li> <li><strong>Random flip:</strong><ul> <li>Multiplied AccML by -1 to reverse right &amp; left</li> <li>Applied to half the training sequences</li></ul></li> <li><strong>Random magnitude warping:</strong><ul> <li>The difference between each acceleration feature's value and its mean value was multiplied by a coefficient randomly sampled from a gaussian distribution with a mean of 0 and a standard deviation of 0.1</li> <li>Applied to half the training sequences</li></ul></li> <li><strong>Noisy time features:</strong><ul> <li>Normalized times within each context window shifted by value sampled from gaussian distribution with mean of 0 and standard deviation of 0.05</li> <li>Applied before calculating SinNormalizedTime (so the same noise impacts both features).</li> <li>Applied to ALL the training sequences</li></ul></li> </ul> <h1>Inference time data augmentation</h1> <p>Each sample was classified 16 times by each model.</p> <ul> <li>With and without multiplying AccML by -1 to reverse right &amp; left</li> <li>Sequences were classified in overlapping context windows with a stride equal to 1/8 the window length. Similar to random crop data augmentation.</li> </ul> <p>The values saved to the submission file were the simple mean of all predictions from all models.</p> <h1>Handling defog vs tdcsfog</h1> <p>I used the same models for both datasets. I did not do anything to normalize the sample rates or feature values. I did not even convert the features to have the same units. 
Normalization seemed to be harmful.</p> <h1>Ensembling / random seed hacking</h1> <p>I used 2 near-identical models that were trained with identical hyperparameters from the same cross-validation fold, but with different random seeds for weight initialization &amp; shuffling the training data. They were filtered to have mAP scores in the top 20% of my local cross validation and top 50% of my LB scores. This probably improved my score by around 0.01 - 0.02 vs. just using 2 random models.</p> <p><strong>Inference notebook:</strong> <a href="https://www.kaggle.com/jsday96/parkinsons-overlapping-se-unet-frequency-domain" target="_blank">https://www.kaggle.com/jsday96/parkinsons-overlapping-se-unet-frequency-domain</a></p>
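<p>A minimal PyTorch sketch of a 1D squeeze-and-excitation block like the one used above: the global average pool over the (very long) context window is what gives every sample access to global context. Channel sizes are illustrative.</p> <pre><code>import torch
import torch.nn as nn

class SqueezeExcite1d(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (batch, channels, time)
        w = self.fc(x.mean(dim=2))         # squeeze: global average over the whole window
        return x * w.unsqueeze(-1)         # excite: per-channel rescaling

y = SqueezeExcite1d(64)(torch.randn(2, 64, 10240))   # same shape out: (2, 64, 10240)
</code></pre>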
Parkinson's Freezing of Gait Prediction
12th place solution: Simple features and LSTM.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to thank the host and Kaggle for organizing such an interesting competition.<br> I believe that the methods proposed by the winner and prizewinners who received high scores on both the public and private test sets will be helpful to the host.</p> <h2>Summary</h2> <ul> <li>Ensemble of 4 LSTM-based models</li> <li>Use 5 summary features: mean, std, max, min, median.</li> <li>Notype data is used as validation data.</li> <li>Models selected separately for tDCS FOG and DeFOG.</li> </ul> <h2>Data Preprocessing</h2> <p>I compressed all data into a fixed length. The length is 2048 because the shortest training series is 2359 samples long.</p> <p>Compression method:</p> <ol> <li>Expand the original data to the smallest multiple of the target length that exceeds the original data length: (ch, target_size * n).</li> <li>Fold the data into (ch, target_size, n).</li> </ol> <p>As features, I used the mean, std, max, min, and median within this n.<br> In addition to this, it was also useful to add percentile points (15~90th percentile).<br> I rarely used the information on each ID and Subject; only "Visit" was used as a label for multitasking.</p> <pre><code># reconstructed from the formatted post: the imports, dictionary keys, axis
# values, zoom mode and percentile list did not survive, so those specifics
# are assumptions
import numpy as np
from scipy import ndimage as ndi

def compress(x, target_size=2048, use_percentile_feat=True):
    ch = x.shape[0]
    input_size = x.shape[1]
    pad = target_size - input_size % target_size
    factor = (input_size + pad) / input_size
    x = np.array([ndi.zoom(xi, zoom=factor, mode="nearest") for xi in x])
    x = x.reshape((ch, target_size, -1))
    res = {}
    res["mean"] = np.mean(x, axis=2).reshape(ch, -1)
    res["max"] = np.max(x, axis=2).reshape(ch, -1)
    res["min"] = np.min(x, axis=2).reshape(ch, -1)
    res["median"] = np.median(x, axis=2).reshape(ch, -1)
    res["std"] = np.sqrt(np.var(x, axis=2).reshape(ch, -1))
    if use_percentile_feat:
        for p in [15, 30, 45, 60, 75, 90]:
            res[f"p{p}"] = np.percentile(x, [p], axis=2).reshape(ch, -1)
    return res
</code></pre> <h2>Model</h2> <p>The models used were based on LSTM: a simple 4-block model (roughly sketched at the end of this post) and the model from the <a href="https://www.kaggle.com/competitions/ventilator-pressure-prediction/discussion/285330" target="_blank">3rd place solution of a similar past competition</a>. 
For DeFOG, I also used <a href="https://www.kaggle.com/code/cdeotte/tensorflow-transformer-0-112" target="_blank">Transformer</a>.</p> <h2>CV Strategy</h2> <ul> <li><p>tDCS FOG<br> In tDCS FOG, I used 4fold StratifiedGroupKfold with Event label and Subject as group.<br> However, CV was quite high for certain folds.<br> This is probably due to the large number of examples of difficult classes such as StartHesitation and Walking appearing for a long time in Subejt "2d57c2", which reduced the percentage of False Positives.<br> Therefore, I had selected a model based on a score that excluded this Subject.<br> Personally, I think that the reason why Shake occurred was also due to the existence of such a Subject in the Public data, and that many teams overfit to it.</p></li> <li><p>DeFOG<br> Because DeFOG has few labeled data, I used all labeled data as training data.<br> Since there was a correlation between the loss of the three labels and the loss of the Event label only, I used Nototype as the validation data.<br> Although it was not possible to make a final selection, I trained on all data including tdcsfog, and the model selected based on the Notype data was the my best private score(<a href="https://www.kaggle.com/code/mrt0933/keras-lstm-trained-by-all-labeled-data" target="_blank">this notebook</a>).</p></li> </ul> <h2>Final submission</h2> <p>I selected 4 models for each of tdcsfog and defog and ensemble them.<br> For tDCS FOG, I chose the one with the highest cv for each fold.<br> The best public score had a high enough cv, so I selected this score as the final submission (I'm glad I didn't get caught in the shaking).</p> <ul> <li>tDCS FOG. CV:0.28</li> </ul> <table> <thead> <tr> <th>Fold</th> <th>Model</th> <th>CV</th> </tr> </thead> <tbody> <tr> <td>0 (w/o "2d57c2”)</td> <td>LSTM (multitasking)</td> <td>0.179</td> </tr> <tr> <td>1</td> <td>LSTM (1d CNN head)</td> <td>0.337</td> </tr> <tr> <td>2</td> <td>LSTM (add percentile features)</td> <td>0.278</td> </tr> <tr> <td>3</td> <td>LSTM (1d CNN head)</td> <td>0.293</td> </tr> </tbody> </table> <ul> <li>DeFOG. CV: 0.381(Event)</li> </ul> <table> <thead> <tr> <th>No</th> <th>Model</th> <th>CV</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>LSTM</td> <td>0.292</td> </tr> <tr> <td>1</td> <td>LSTM (add percentile features)</td> <td>0.318</td> </tr> <tr> <td>2</td> <td>LSTM (long 4096)</td> <td>0.286</td> </tr> <tr> <td>3</td> <td>Transformer</td> <td>0.286</td> </tr> </tbody> </table> <p>Public LB: 0.444<br> Private LB: 0.341</p> <p>Finally, I am happy to become a kaggle master with this gold medal.<br> However, although I got a gold medal, I am disappointed that I could not design a model that could capture FOG events well both quantitatively and qualitatively.<br> I would love to participate in the second and third competitions if they are held. Thank you very much.</p>
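<p>A rough PyTorch sketch of the simple stacked-LSTM model mentioned above, operating on the compressed length-2048 sequences (one input channel per sensor/summary-feature combination). The channel count and layer sizes are illustrative assumptions, not the author's settings.</p> <pre><code>import torch
import torch.nn as nn

class FogLSTM(nn.Module):
    def __init__(self, in_feats=15, hidden=128, num_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(in_feats, hidden, num_layers=4,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                  # x: (batch, 2048, in_feats)
        h, _ = self.lstm(x)
        return self.head(h)                # per-step logits: StartHesitation, Turn, Walking

out = FogLSTM()(torch.randn(2, 2048, 15))  # out: (2, 2048, 3)
</code></pre>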
BirdCLEF 2023
763rd Place Solution for the BirdCLEF 2023 Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Context</strong></p> <ul> <li>Business context (use case) - <a href="https://www.kaggle.com/competitions/birdclef-2023/overview" target="_blank">https://www.kaggle.com/competitions/birdclef-2023/overview</a></li> <li>Data context - <a href="https://www.kaggle.com/competitions/birdclef-2023/data" target="_blank">https://www.kaggle.com/competitions/birdclef-2023/data</a><br> <strong>Overview of the Approach</strong><br> I have named this approach the Reading the Robot Mind system. Specifically, an interactive Jupyter notebook [1] is built for the purpose of training and deploying a deep learning neural network artificial intelligence (AI) [2] to automatically identify birds from recorded audio [3]. The notebook allows the user to modify parameters along the training and classification (inference) pipeline and observe the results. As with traditional observation methods, the notebook lets users view visual representations (spectrograms, etc.) of input vectors for similar and different birds [4]. In addition to traditional methods, this notebook also presents data in its original format (audio recordings of birds). This is common practice for a field researcher or subject matter expert (SME) testing a microphone and recording system [5]- they will want to listen to the recordings to see if they contain valid and sufficient information. The notebook [6] extends this intuitive and useful technique to individual neural network layers - working backwards towards a best estimate of the original input (referred to in this working note as "reading the robot mind"). 
The user can even provide just the "answer" (select a bird at the final output layer), and the reading the robot mind system will work backwards through the entire automated process and AI layers to let the SME hear a best approximation of what the AI has learned that bird sounds like.<br> <strong>Details of the Submission</strong><br> Note that due to the time and compute limitations imposed by the computing environment and contest rules provided, the notebook is divided into four public notebooks:</li> <li><a href="https://www.kaggle.com/code/pnussbaum/v15h-birdclef2023-mindreader" target="_blank">https://www.kaggle.com/code/pnussbaum/v15h-birdclef2023-mindreader</a> - This notebook focuses on the Segmentation and Feature Extraction aspects of the AI solution, allowing users to make modifications and see and hear how much information is retained.</li> <li><a href="https://www.kaggle.com/code/pnussbaum/v15h-all-birdclef2023-mindreader" target="_blank">https://www.kaggle.com/code/pnussbaum/v15h-all-birdclef2023-mindreader</a> - This notebook allows the user to use their final decision related to segmentation and feature extraction, and convert and save all the BirdClef2023 data into this format.</li> <li><a href="https://www.kaggle.com/code/pnussbaum/v16e-gpu-all-birdclef2023-mindreader" target="_blank">https://www.kaggle.com/code/pnussbaum/v16e-gpu-all-birdclef2023-mindreader</a> - This notebook uses the final decisions noted above, and trains the entire AI for a longer period of time, achieving better accuracy, and saving the trained AI system.</li> <li><a href="https://www.kaggle.com/code/pnussbaum/v17b-all-birdclef2023-mindreader" target="_blank">https://www.kaggle.com/code/pnussbaum/v17b-all-birdclef2023-mindreader</a> - This notebook brings all of this together for the sake of the contest submission and scoring.</li> </ul> <p>In the following sections, segmentation, feature extraction, quantization, AI model creation, training, and validation are discussed in detail. Also shown with formulas and examples are the aspects of the reading the robot mind system, including visualization of filters, recreation of input approximations based on outputs of intermediate and final layers of the AI system, and also the method whereby the output can be forced to an individual bird, and a best approximation of what that bird sounds like is created by the system.</p> <p><em>Segmentation, Feature Extraction, and Image Quantization Analysis</em><br> An automated segmentation algorithm is used, however, due to the rules of the BirdCLEF 2023 competition, the SME is not permitted to modify this algorithm [3]. After segmentation, the audio data is transformed into another domain (called feature extraction in this document), and finally saved as a two-dimensional grayscale image with 8-bit quantization. 
The quantization was chosen due to the power and memory constraints of the edge device that will be performing inference (bird classification) in the field.<br> The notebook allows the SME to try several Feature Extraction Algorithms and test the following:</p> <ul> <li>Do the features visually look similar for the same bird, and different for different birds?</li> <li>Is the similarity/difference enough to be able to visually classify which bird is which?</li> <li>If the feature extraction algorithm is performed in reverse to recreate the inputted audio (or an approximation thereof, due to the lossy nature of feature extraction) - is the recreated sound clear enough for the SME to identify the bird?</li> <li>If the answer is "no" to any of the above, allow fine tuning by the user</li> </ul> <p><em>Layer-wise Analysis of the AI (Convolutional, Max Pooling, and Dense Layers)</em><br> A simple sequential (not residual) convolutional AI is used, with dimensions and architecture shown in the aforementioned notebooks. This was found to yield a satisfactory accuracy measure for the purposes of demonstrating the reading the robot mind system, while still falling within the running time and memory constraints imposed by the development platform and contest rules.</p> <p><em>Visualizing Filter Patches</em><br> The notebook allows visualization of the convolutional filter patches. Although this information is more useful to the AI programmer than it is to the SME, it is presented here since it is a step in the system of reading the robot mind. The algorithms used are similar to the "expansion" method [12].</p> <p><em>Using Filter Patches to Reconstruct Approximation of Input (for Convolutional and Max Pooling layers)</em><br> Using convolutional neural networks to extract patterns from image data necessarily involves loss of information. Each convolutional layer has only a limited number of filters. Even though that limited number of filters is greater than would be needed to form an ortho-normal basis set (each filter having all 0 weights except for a weight of one at a different individual pixel), the AI is not being trained as an auto-encoder, and is instead being trained to minimize the loss when classifying birds. Also, information is being lost through the use of the "relu" activation function, which makes all negative output values a 0. Most of all, information is being lost at the max pooling layers.</p> <p><em>Working Backwards from a Bird Classification (What Does This Bird Sound Like?)</em><br> The reading the robot mind system also allows one more function that can prove useful to the SME who is helping the AI programmer improve the system. This function is the ability to specify a particular output (bird classification) and work backwards through the entire AI to recreate an approximation of the original audio input, even when no input is provided.</p> <p><em>Conclusions</em><br> The reading the robot mind system is implemented, allowing the SME to observe and qualitatively analyze the internal data flow of deep learning neural networks in a format familiar to them.<br> With this, the SME can understand where in the pipeline the most information is being discarded by the AI system, and possibly help the programmer make improvements in future systems.</p> <p><strong>Sources</strong><br> [1]<br> J. Perkel, "Why Jupyter is data scientists' computational notebook of choice," Nature, vol. 563.7732, no. (2018), pp. 145-147, 2018.<br> [2]<br> IBM, "AI vs. Machine Learning vs. Deep Learning v. 
Neural Networks: What's the Difference?," 2023. [Online]. Available: <a href="https://www.ibm.com/cloud/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks" target="_blank">https://www.ibm.com/cloud/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks</a>.<br> [3]<br> S. D. T. K. H. R. H. C. F. G. H. G. H. V. W. P. R. J. A. Kahl, "Overview of BirdCLEF 2023: Automated bird species identification in Eastern Africa.," Working Notes of CLEF 2023 – Conference and Labs of the Evaluation Forum, 2023.<br> [4]<br> Audibon Society, "Start using Spectrograms to Read Bird Songs and Calls," 2023. [Online]. Available: <a href="https://www.audubon.org/news/start-using-spectrograms-read-bird-songs-and-calls" target="_blank">https://www.audubon.org/news/start-using-spectrograms-read-bird-songs-and-calls</a>.<br> [5]<br> Acoustic Nature, "Best way to record birdsong: Gear guide and tips," 2020. [Online]. Available: <a href="https://www.audubon.org/news/start-using-spectrograms-read-bird-songs-and-calls" target="_blank">https://www.audubon.org/news/start-using-spectrograms-read-bird-songs-and-calls</a>. [Accessed 2023].<br> [6]<br> P. Nussbaum, "v15h BirdClef2023 Mindreader," 2023. [Online]. Available: <a href="https://www.kaggle.com/code/pnussbaum/v15h-birdclef2023-mindreader" target="_blank">https://www.kaggle.com/code/pnussbaum/v15h-birdclef2023-mindreader</a>.<br> [7]<br> C. B. L. P. S. K. H. G. B. D. D. M. J. E. C. L. T. L. R. C. M. Š. M. H. M. S. H. G. R. P. W.-P. V. H. K. T. D. I. E. P. B. H. M. A. Joly, "Overview of LifeCLEF 2023: evaluation of ai models for the identification and prediction of birds, plants, snakes and fungi," in International Conference of the Cross-Language Evaluation Forum for European Languages, 2023.<br> [8]<br> J. M. K. M. S. a. M. P. W. Wu, "Explainable AI for Early Detection of Health Changes Via Streaming Clustering," in 2022 IEEE International Conference on Fuzzy Systems, Padua, 2022.<br> [9]<br> S. P. a. S. A. S. Sutthithatip, "(Explainable) Artificial Intelligence in Aerospace Safety-Critical Systems," in IEEE Aerospace Conference, Big Sky. MT, 2022.<br> [10]<br> Librosa, "librosa.feature.melspectrogram," 2023. [Online]. Available: <a href="https://librosa.org/doc/main/generated/librosa.feature.melspectrogram.html" target="_blank">https://librosa.org/doc/main/generated/librosa.feature.melspectrogram.html</a>. [Accessed 2023].<br> [11]<br> Librosa, "librosa.feature.mfcc," 2023. [Online]. Available: <a href="https://librosa.org/doc/main/generated/librosa.feature.mfcc.html" target="_blank">https://librosa.org/doc/main/generated/librosa.feature.mfcc.html</a>. [Accessed 2023].<br> [12]<br> e. a. Voss, "Visualizing Weights," Distill, vol. 10.23915/distill.00024.007, no. 10.23915/distill.00024.007, p. 10.23915/distill.00024.007, 2021.<br> [13]<br> Google, 2023. [Online]. Available: <a href="https://distill.pub/2020/circuits/visualizing-weights/" target="_blank">https://distill.pub/2020/circuits/visualizing-weights/</a>.</p>
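<p>To make the "hear what the pipeline keeps" idea concrete, here is a minimal sketch (not the notebooks' actual code) of computing a log-mel feature image with librosa and inverting it back to audio so an SME can listen to the lossy approximation; all parameter values and the example clip are illustrative.</p> <pre><code>import numpy as np
import librosa

y, sr = librosa.load(librosa.ex("trumpet"), sr=32000)   # stand-in for a bird recording

# forward: audio -> log-mel "feature image" (the lossy representation the CNN sees)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=128)
img = librosa.power_to_db(mel, ref=np.max)               # this is what gets quantized to 8-bit

# backward: approximate inversion so the SME can listen to what survived
mel_back = librosa.db_to_power(img, ref=mel.max())
y_approx = librosa.feature.inverse.mel_to_audio(mel_back, sr=sr, n_fft=2048, hop_length=512)
</code></pre> <p>Listening to <code>y_approx</code> shows how much information the chosen feature extraction discards before the network ever sees the data, which is the first stage of the "reading the robot mind" pipeline described above.</p>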
BirdCLEF 2023
1st place solution: Correct Data is All You Need
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi, Kagglers!</p> <p>Let's start our journey in the tricky world of Audio Bird Data and Modelling, but before this, a few very important words:</p> <p><em>I would like to thank the Armed Forces of Ukraine, Security Service of Ukraine, Defence Intelligence of Ukraine, and the State Emergency Service of Ukraine for providing safety and security to participate in this great competition, complete this work, and help science, technology, and business not to stop but to move forward.</em></p> <h1>If You Only Knew the Power of A100s GPUs</h1> <p>I have managed to run 294 experiments: half of them with 5 folds and half of them with full data training. So, all in all, many hypotheses were checked and, of course, most of them were rejected :) So let's take a look.</p> <h1>Data, Data is Everywhere</h1> <h2>Let's Start from 2023 Training Data</h2> <p>If you take a look at <code>train_metadata["primary_label"].value_counts()</code>, you may notice a strange magic number: the counts max out at exactly 500.</p> <p>Why do we have a maximum of 500 representatives of some species? I do not know the 100% answer, but I have a strong hypothesis - a bug in <a href="https://github.com/ntivirikin/xeno-canto-py" target="_blank">XC API</a>. I cannot remember the exact place in the code, but the overall problem lies in the data loading pipeline. Here's how it works:</p> <ol> <li>Download meta files - json files.</li> <li>Iterate over all urls in the meta file(s) and download them.</li> </ol> <p>BUT if you have more than 500 files for one species - on the first stage, you will have several json files (maximum number of files in one json metafile = 500), and here we have the problem! On the second stage, the API takes into account only one json for each species and ignores the next ones, so you will have a maximum of 500 files per species.</p> <p>NOTE: I am not sure whether it is fixed in the latest version of the API, but I have used a commit from the previous year, and it was there.</p> <p>From this bug, we can clearly understand that using the fixed API can hugely enrich our training dataset.</p> <h2>Other more boring stuff</h2> <ul> <li>2023/2022/2021/2020 competition data</li> <li><a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/398318" target="_blank">2020 additional competition data</a></li> <li><a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/394358#2179605" target="_blank">Zenodo</a></li> <li>Xeno-Canto </li> </ul> <h2>Data Preparation</h2> <h3>Training Data</h3> <p>In order to make validation more robust:</p> <ul> <li>Split samples of species with only one representative into 2 splits. 
This is done in order to have at least one CV split with each species in train AND val splits.</li> <li>Remove some duplicates manually.</li> <li>Remove duplicates by the next rule: Two samples have same: duration, author, primary_label.</li> </ul> <h3>Additional training data</h3> <p>From the 2023/2022/2021/2020 competition data plus Xeno-Canto data, I have selected only files with this year's primary labels and added them to the final stage of training.</p> <h3>Pretrained Dataset</h3> <p>When I was using only 2023 training data in the final stage of training, pretraining on 2022/2021/2020 competition data boosted the score a lot. But after adding additional training data, pretraining stopped working on the leaderboard (though it still increased local validation). In the last week, I decided to return to pretraining experiments. This granted me one position up in the public leaderboard and two positions up in the private leaderboard - so, Kagglers, don't forget to revisit even rejected hypotheses :) </p> <p>Why and when did it work? Compared to previous pretraining experiments, I have:</p> <ul> <li>Filtered out 2023 train data duplicates not only by id but also by 'author + primary_label', as was suggested <a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/395843" target="_blank">here</a></li> <li>Taken species that are present in 2023/2022/2021/2020 competition data + 2020 additional competition data and only if there are more than 10 representatives of the species. Overall, 822 species. </li> <li>Added additional files for selected species from Xeno Canto.</li> </ul> <h3>Zenodo</h3> <p>I have selected nocall regions and used them as background augmentation.</p> <h3>Data Experiments That Did Not Work</h3> <ul> <li>Massive pretraining on all Xeno Canto data.</li> <li><a href="https://www.kaggle.com/datasets/christofhenkel/birdclef2021-background-noise" target="_blank">Background noise from 2021 2nd place</a> as Background augmentation </li> <li><a href="https://www.kaggle.com/datasets/mmoreaux/environmental-sound-classification-50" target="_blank">ESC50</a> as Background augmentation </li> <li>Selecting only High Quality samples (&gt;=32kHz) from additional data </li> <li>Maybe some other ideas out of 200+ experiments that I have just forgotten</li> </ul> <h1>Validation: Be soft like cmAP, Do not be hard like F1</h1> <p>Finally! We do not have to select a threshold on completely different training data compared to soundscape data, come up with super <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">sophisticated schemes</a> or fall 19 places (as I did in 2021)</p> <p>I have used pretty much the same validation scheme as in previous years' competitions:</p> <ul> <li>Stratified CV on 5 Folds</li> <li>Take max prob from each 5 second clip over time across ALL sample</li> </ul> <p>IMPORTANT: For Padded cmAP it is pretty important to take mean across folds, NOT to do Out Of Fold !!! </p> <p>Of course, absolute numbers of CV and LB are different:</p> <ul> <li>Best Public LB: 0.84444 (4 fours :) )</li> <li>Best Private LB: 0.76392</li> <li>Best CV: 0.9083368282233681 </li> </ul> <p>But the rank correlation was pretty good. CV improvement in 0.0x (and more) resulted in improvement on LB. 
I have nearly all CV results for my experiments, so I hope I will have time to publish a paper with a detailed ablation study and a CV-LB correlation study.</p> <h1>Training</h1> <p>I have taken a look at <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>'s <a href="https://www.youtube.com/watch?v=NCGkBseUSdM" target="_blank">presentation</a> and understood how strongly I was overfitting all the time.</p> <p>Due to time and device constraints, I have chosen the following scheme:</p> <ol> <li>Validate the hypothesis on CV and submit the first 2-3 folds.</li> <li>For ensembling, retrain on the full training data, so you have one model for each setup.</li> </ol> <p>Training Details:</p> <ul> <li>50 Epochs</li> <li>Adam</li> <li>CosineAnnealing from 1e-4 (or 1e-3) to 1e-6</li> <li>Focal loss </li> <li>64 BS</li> <li>5 second chunk </li> <li>SUPER IMPORTANT: Class sampling weights</li> </ul> <pre><code># exponent assumed: inverse square-root frequency weighting
sample_weights = (
    all_primary_labels.value_counts() / all_primary_labels.value_counts().sum()
) ** (-0.5)
</code></pre> <ul> <li>Same setups for pretrain and finetune</li> </ul> <p>Stages:</p> <ol> <li>Pretrain - refer to <code>Pretrained Dataset</code></li> <li>Tune only on scored species </li> </ol> <h1>Model</h1> <p>Because of computational constraints, we couldn't use the golden rule of Deep Learning: Stack More Layers!</p> <p>So I dived a bit into inference optimization techniques:</p> <ul> <li>ONNX - this worked pretty well for me. It improved the inference time slightly and allowed me to reduce the number of custom dependencies in the inference notebook.</li> <li>Quantization - I spent more than a week experimenting with it, but unfortunately, I had no success :( </li> <li>openvino - I didn't use or try this, I just read about it in the <a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/412707" target="_blank">2nd place description</a> and burnt my chair </li> </ul> <p>Overall, my final submission is an ensemble of 3 Sound Event Detection (SED) models with the following backbones:</p> <ul> <li>eca_nfnet_l0 (2 stages training; Start LR 1e-3)</li> <li>convnext_small_fb_in22k_ft_in1k_384 (2 stages training; Start LR 1e-4)</li> <li>convnextv2_tiny_fcmae_ft_in22k_in1k_384 (1 stage training; Start LR 1e-4)</li> </ul> <p>It was pretty important to tweak the starting learning rate for different architectures!!!</p> <h1>Augmentations</h1> <p>I was pretty picky about augmentation selection, so my final models used only the following ones:</p> <ul> <li>Mixup: simply OR Mixup with Prob = 0.5 (a minimal sketch is shown at the end of this post)</li> <li>BackgroundNoise with Zenodo nocall</li> <li>RandomFiltering - a custom augmentation: in simple terms, it's a simplified random Equalizer</li> <li>Spec Aug: <ul> <li>Freq: <ul> <li>Max length: 10</li> <li>Max lines: 3</li> <li>Probability: 0.3</li></ul></li> <li>Time:<ul> <li>Max length: 20</li> <li>Max lines: 3</li> <li>Probability: 0.3</li></ul></li></ul></li> </ul> <h1>Small inference tricks</h1> <ul> <li>Using temperature mean: <code>pred = (pred**2).mean(axis=0) ** 0.5</code></li> <li>Using Attention SED probs * 0.75 + Max Timewise probs * 0.25</li> </ul> <p>All these gave marginal improvements, but it was a matter of the first 3 places :) </p> <h1>Other stuff that created a carbon footprint but did not improve my LB score</h1> <p>This section will be far from complete but let's add something that I have in mind now:</p> <ul> <li><a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/412707" target="_blank">2021 2nd place model</a>. 
I have tried (like I did in 2022) but unfortunately it did not work for me </li> <li>Pretrain on whole Xeno Canto</li> <li>Train on larger chunks. The same result occurred if I inferred on smaller chunks or on same length chunks</li> <li>Colored Noise augmentations </li> <li>CQT or <a href="https://github.com/denfed/leaf-audio-pytorch" target="_blank">LEAF</a></li> <li>Specific finetuning: smaller LR, smaller number of epochs, freeze backbone, different LRs for backbone and head</li> <li>Loss on Attention SED probs + Loss on Max Timewise probs</li> <li>Deep Supervision </li> <li>Different <code>alpha</code> for MixUp</li> <li>Transformer architectures. For example <a href="https://speechbrain.readthedocs.io/en/latest/API/speechbrain.lobes.models.ECAPA_TDNN.html" target="_blank">ECAPA TDNN</a> </li> </ul> <h1>Closing words</h1> <p>I hope you have not fallen asleep while reading. Finally, I want to thank the entire Kaggle community, congratulate all participants and winners.<br> Special thanks to Cornell Lab of Ornithology, LifeCLEF, Google Research, Xeno-canto, <a href="https://www.kaggle.com/stefankahl" target="_blank">@stefankahl</a>, <a href="https://www.kaggle.com/tomdenton" target="_blank">@tomdenton</a>, <a href="https://www.kaggle.com/holgerklinck" target="_blank">@holgerklinck</a>. All of you were super active in discussions, shared datasets and interesting materials, answered all questions, and of course, prepared such a cool competition!"</p> <h1>Resources</h1> <p><strong>Inference Kernel</strong> : <a href="https://www.kaggle.com/code/vladimirsydor/bird-clef-2023-inference-v1/notebook" target="_blank">https://www.kaggle.com/code/vladimirsydor/bird-clef-2023-inference-v1/notebook</a><br> <strong>GitHub</strong> : <a href="https://github.com/VSydorskyy/BirdCLEF_2023_1st_place" target="_blank">https://github.com/VSydorskyy/BirdCLEF_2023_1st_place</a><br> <strong>Paper</strong> : TBD</p>
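<p>The "OR Mixup" in the augmentation list is not spelled out in code, so here is a minimal sketch of one common interpretation, and only an assumption about the exact implementation: waveforms are blended, but the multi-hot targets are combined with an element-wise maximum instead of being interpolated.</p> <pre><code>import torch

def or_mixup(x, y, prob=0.5, alpha=1.0):
    # x: (B, T) waveforms, y: (B, C) multi-hot labels; a sketch, not the author's code
    if torch.rand(1).item() > prob:
        return x, y
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = torch.maximum(y, y[perm])  # "OR" of the two label sets: no soft targets
    return x_mix, y_mix
</code></pre> <p>Keeping the labels hard rather than mixing them with <code>lam</code> matches the intuition that both birds are still audible in the mixed clip.</p>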
IceCube - Neutrinos in Deep Ice
20th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Team members are junseonglee11 (@junseonglee11), Ayaan Jang(@ayaanjang). <br> We ensembled 6 LSTM models (2 different versions).<br> We modified Robin Smith's and Robert Hatch's notebooks. </p> <h1><strong>Our notebooks:</strong></h1> <p>Inference: <a href="https://www.kaggle.com/code/ayaanjang/20th-tensorflow-lstm-model-inference-merged" target="_blank">https://www.kaggle.com/code/ayaanjang/20th-tensorflow-lstm-model-inference-merged</a><br> Train: <a href="https://www.kaggle.com/code/junseonglee11/20th-tensorflow-tfrecord-tpu-lstm-line-fit-train" target="_blank">https://www.kaggle.com/code/junseonglee11/20th-tensorflow-tfrecord-tpu-lstm-line-fit-train</a><br> Dataset (TFRecord): <a href="https://www.kaggle.com/code/junseonglee11/icecube-data-to-tfrecord-v2-1" target="_blank">https://www.kaggle.com/code/junseonglee11/icecube-data-to-tfrecord-v2-1</a> </p> <h1><strong>References</strong></h1> <p>Robin Smith's: notebooks:<br> <a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-inference" target="_blank">https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-inference</a><br> <a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-training-tpu" target="_blank">https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-training-tpu</a><br> <a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-data-preprocessor/notebook" target="_blank">https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-data-preprocessor/notebook</a> I modified his notebook </p> <p>Robert Hatch's notebook<br> <a href="https://www.kaggle.com/code/roberthatch/lb-1-183-lightning-fast-baseline-with-polars" target="_blank">https://www.kaggle.com/code/roberthatch/lb-1-183-lightning-fast-baseline-with-polars</a><br> It was crucial to improving our score. 
Used the results of this notebook as additional inputs in our model.</p> <p>Seungmoklee's notebook<br> <a href="https://www.kaggle.com/code/seungmoklee/lstm-preprocessing-point-picker" target="_blank">https://www.kaggle.com/code/seungmoklee/lstm-preprocessing-point-picker</a></p> <h1><strong>Data preprocessing</strong></h1> <p><strong>Our code:</strong> [<a href="https://www.kaggle.com/code/junseonglee11/icecube-data-to-tfrecord-v2-1" target="_blank">https://www.kaggle.com/code/junseonglee11/icecube-data-to-tfrecord-v2-1</a>]</p> <ol> <li><p>Reference part: Preprocessed data to generate 96 time-series data with 6 features including:</p> <ul> <li>Sensor signal measurement time</li> <li>Sensor signal strength</li> <li>Sensor signal quality</li> <li>X, Y, Z coordinates of received sensor (3 features)</li></ul></li> <li><p>Performed feature engineering on the 6 features to improve the prediction accuracy of the RNN (recurrent neural network) model (trained on 90 data files instead of the entire dataset, then experimented with feature transformation)</p> <ul> <li><strong>Using original features: LB 1.015</strong></li> <li><strong>Adding Time Difference: LB 1.0128</strong></li> <li><strong>Replaced sensor signal measurement time with time interval (time difference between next and current measurement time, Time diff):</strong> <br> The difference in sensor position between current and next time points has different meanings depending on the time interval (for example, <br> moving 10m in 1 second vs. 1m in 10 seconds is a 100-fold speed difference). However, the absolute measurement time of the sensor signal <br> cannot reflect this, so it was determined that the time interval is a more appropriate input than the measurement time.</li> <li><strong>Adding Coordinate Difference: LB 1.0115</strong> <br> <strong>Added three features with the difference in X, Y, Z coordinates between the next and current sensor positions:</strong> Similar to the time <br> interval feature, it was determined that the difference in coordinate values between time points can better reflect the direction information of <br> the neutrino. However, when replacing the sensor's xyz coordinates with coordinate difference values, the accuracy decreased. 
This is <br> because the difference in sensor coordinate values between non-adjacent time points also carries important information.</li></ul></li> <li><p>Convert all the inputs above to TensorFlow TFRecord format to minimize CPU memory usage and accelerate training.</p></li> </ol> <h1><strong>Model training and inference</strong></h1> <p><strong>Our code (model):</strong> <a href="https://www.kaggle.com/code/junseonglee11/20th-tensorflow-tfrecord-tpu-lstm-line-fit-train" target="_blank">https://www.kaggle.com/code/junseonglee11/20th-tensorflow-tfrecord-tpu-lstm-line-fit-train</a><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7429668%2Feea4665a122c77a5c2e3aa60a38e65b1%2F.png?generation=1683061023145992&amp;alt=media" alt=""></p> <ul> <li><strong>Model Overview</strong>: Bidirectional LSTM (long short-term memory), 7 layers + batch normalization + line-fitting feature concatenation + dense layer</li> <li><strong>Activation Function</strong>: GELU (Gaussian Error Linear Unit) activation layer</li> <li><strong>Loss Function</strong>: sparse_categorical_crossentropy</li> <li><strong>Training Metric</strong>: Accuracy</li> <li><strong>Optimizer:</strong> RAdam (Rectified Adam) optimizer</li> <li><strong>Ensemble (2 versions of the model)</strong></li> <li>Model trained on fold 0 of the dataset divided into 10 folds<ul> <li>Train: 594 data files, Valid: 66 data files: model trained using only 1 validation dataset</li> <li>Train: 659 data files, Valid: 1 data file: four models with the lowest mean angular error are selected from each version of the model (total of 8 models)</li> <li>Optimal ensemble weights are determined through cross-validation</li></ul></li> </ul> <h1><strong>Data postprocessing</strong></h1> <p><strong>Our code (Data postprocessing):</strong> <a href="https://www.kaggle.com/code/ayaanjang/20th-tensorflow-lstm-model-inference-merged" target="_blank">https://www.kaggle.com/code/ayaanjang/20th-tensorflow-lstm-model-inference-merged</a><br> <strong>Original code:</strong> The predicted probabilities were used as-is (not raised to a power).<br> A weighted average of the predicted probability for each category and the direction represented by each category is used to <br> calculate the azimuth and zenith of the neutrino (a minimal sketch of this step is shown below).<br> <strong>Changed code:</strong> Raise the predicted probabilities to a power before taking the weighted average.<br> It is not optimal to simply combine the per-category directions with the raw predicted probabilities.<br> We tried various modifications of the predicted probability (exponential and logarithmic functions, activation functions, etc.) to improve the <br> post-processing stage.<br> <strong>Improvement:</strong> Raising the model-predicted category probabilities to the power of 1.35 decreased the mean angular error by about 0.002.<br> → This gives appropriate extra weight to the categories predicted with high probability.</p>
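<p>To make the post-processing step concrete, here is a minimal sketch of the weighted average over bin directions with sharpened probabilities. The function name and bin-center handling are illustrative assumptions, not the team's exact code.</p> <pre><code>import numpy as np

def probs_to_direction(probs, bin_azimuth, bin_zenith, power=1.35):
    """probs: (n_bins,) softmax output; bin_azimuth/bin_zenith: (n_bins,) bin centers in radians."""
    w = probs ** power                      # sharpen: extra weight for confident bins
    # unit vector of each bin center
    vx = np.cos(bin_azimuth) * np.sin(bin_zenith)
    vy = np.sin(bin_azimuth) * np.sin(bin_zenith)
    vz = np.cos(bin_zenith)
    x, y, z = (w * vx).sum(), (w * vy).sum(), (w * vz).sum()
    azimuth = np.arctan2(y, x) % (2 * np.pi)             # wrap into [0, 2*pi)
    zenith = np.arccos(z / np.sqrt(x * x + y * y + z * z))
    return azimuth, zenith
</code></pre> <p>With <code>power=1.0</code> this reduces to the plain weighted average; the reported gain came from setting the exponent to 1.35.</p>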
AMP®-Parkinson's Disease Progression Prediction
13th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: AMP®-Parkinson's Disease Progression Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congratulations to the winners!</p> <p>My solution is based on LGB models, where I built a separate model for each forecast horizon (0, 6, 12, and 24 months) and target (updrs_1, updrs_2, updrs_3, updrs_4).</p> <p>The key variables I utilized for all targets were:</p> <ul> <li>"visit_month"</li> <li>"num_visits": number of visits the patient had before the visit_month</li> <li>"relation": the relation between these two variables</li> </ul> <p>The objective function was MAE.</p> <p>With these features, I achieved:<br> <strong>CV ~ 54.57 (public score: 54.5, private score: 60.3)</strong></p> <p>It is worth saying that about 70% of my submissions outperformed the two I eventually selected. (Luckily I got the gold.)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3147995%2Fa8411134509a98d73b5cedf05178a57c%2FCaptura%20desde%202023-05-19%2009-28-32.png?generation=1684482717080185&amp;alt=media" alt=""></p> <p><strong>FEATURE SELECTION</strong></p> <p>I performed an analysis to identify proteins that enhanced my local validation, particularly for updrs_1, updrs_2 and updrs_3. This was done by running multiple fold divisions and taking the average of CV to mitigate randomness.</p> <p><strong>OTHER THINGS</strong></p> <ul> <li>I introduced random noise to the protein NPX values to prevent overfitting.</li> <li>I randomly set 15% of the protein NPX values to Null, also as a precaution against overfitting (a minimal sketch of these two augmentations is shown below).</li> <li>I did target transformation for updrs_3 and updrs_4 (np.log1p/np.expm1)</li> </ul> <p>With all this, <strong>my best local validation score was 53.46 (public score: 54.8, private score: 60.9).</strong></p> <p><strong>THINGS THAT DIDN'T WORK FOR ME</strong></p> <ul> <li>Remove outliers for training</li> <li>Customize objective function</li> <li>Ensembling with different models</li> <li>Predict trend and then predict residuals</li> </ul> <p><strong>Lessons learned</strong></p> <ul> <li>I should have chosen one submission based on the public LB and another based on CV, instead of relying solely on CV for both.</li> <li>Next time, I will dedicate more time to conducting a deeper Exploratory Data Analysis (EDA).</li> </ul>
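<p>As referenced above, here is a minimal sketch of the two regularization tricks applied to the protein features (random noise plus randomly nulling 15% of the values). The noise scale, column handling, and function name are assumptions for illustration, not the author's code.</p> <pre><code>import numpy as np
import pandas as pd

def augment_npx(df, protein_cols, noise_frac=0.05, null_frac=0.15, seed=0):
    """Return a copy of df with jittered and partially nulled protein NPX features."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    vals = out[protein_cols].to_numpy(dtype=float)
    # multiplicative jitter around the observed value (scale is an assumption)
    vals = vals * (1.0 + rng.normal(0.0, noise_frac, size=vals.shape))
    # randomly hide a fraction of the measurements, as a guard against overfitting
    mask = rng.random(vals.shape) < null_frac
    vals[mask] = np.nan
    out[protein_cols] = vals
    return out
</code></pre> <p>The target transformation mentioned above is simply <code>np.log1p</code> applied to updrs_3/updrs_4 before fitting and <code>np.expm1</code> applied to the predictions.</p>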
Stable Diffusion - Image to Prompts
36th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>No shakeup. Congrats to all the winners!!</p> <p>This is my <strong>second solo silver</strong>. And I'm finally becoming a <strong>Kaggle competition Master</strong> after the result finalises!!!</p> <hr> <p>My final ensemble consists of fine-tuned models and zero-shot models.</p> <p><strong>TLDR:</strong> KNN regression + CLIP interrogator + an ensemble of 4 fine-tuned models, with manual weights of 0.3, 0.1 and 0.6 respectively. The submission notebook can be found <a href="https://www.kaggle.com/code/xfffrank/sd-clip-knnregression-vit-interrogator/notebook" target="_blank">here</a>.</p> <p>The details are explained below.</p> <h2>Fine-tuned models</h2> <h3>Data preprocessing</h3> <p>For this part, or even the whole competition, <strong>data preprocessing and the diversity of datasets</strong> are two of the most important points. For the training of each single model, I used texts from Diffusion-2M, Diffusion-14M, the public 900k and 80k datasets.</p> <p>There are two reasons why preprocessing is necessary:</p> <ol> <li>Filter highly correlated data so that the model does not overfit as easily.</li> <li>Reduce the size of the training set so that the training cost is acceptable.</li> </ol> <p>For each dataset, the preprocessing follows this pipeline:</p> <ol> <li>Remove duplicates according to the existing datasets.</li> <li>Filter by hand-crafted rules.<ol> <li>Remove texts with num_of_words &lt; 5.</li> <li>Remove duplicates for texts that have the same 4 starting/ending words.</li> <li>Remove texts that have non-English characters.</li> <li>Remove texts that have more than 77 tokens, using the same CLIPTokenizer as Stable Diffusion v2.</li></ol></li> <li>Filter by correlation (i.e. cosine similarity) within the dataset. I used a threshold of 0.8.</li> <li>Filter by correlation with existing datasets. I used a threshold of 0.8.</li> </ol> <blockquote> <p>When filtering by correlation, it's important to utilize GPU power and process the dataset in batches. For instance, when utilizing the <code>encode</code> function provided by the "Sentence Transformer" library, enabling the "normalize_embeddings=True" parameter will return embeddings with a unit length. This allows for easy computation of cosine similarity using <code>torch.matmul</code>. (A minimal sketch of this filtering is shown at the end of this post.)</p> </blockquote> <h3>Data generation</h3> <p>I re-generated the images using Stable Diffusion v2 following the settings in the dataset description.</p> <h3>Training</h3> <ul> <li>CV split: split the data according to the token length into 12 ~ 15 folds and take the first fold as the validation set.</li> <li>How many layers to freeze during fine-tuning?<ul> <li>I found that the validation score did not increase after the number of training parameters reached a certain point, so I manually tuned the first unfrozen layer so that the number of trainable parameters was ~80 million.</li></ul></li> <li>Data augmentation: the only two methods I found helpful are <code>HorizontalFlip</code> and <code>MixUp</code>.</li> <li>Number of epochs: 4. 
The validation score always reached the highest point at epoch 3.</li> </ul> <h3>Ensemble</h3> <table> <thead> <tr> <th>Model</th> <th>public LB</th> <th>weight</th> </tr> </thead> <tbody> <tr> <td>clip-vit-large-patch14-336</td> <td>0.57954</td> <td>0.3</td> </tr> <tr> <td>clip-vit-large-patch14</td> <td>0.57665</td> <td>0.2</td> </tr> <tr> <td>blip-image-captioning-large</td> <td>0.57621</td> <td>0.2</td> </tr> <tr> <td>convnext_large_mlp.clip_laion2b_ft_soup_320</td> <td>0.57907</td> <td>0.3</td> </tr> </tbody> </table> <ul> <li>ensemble public LB: 0.59314</li> <li>ensemble public LB (normalise the outputs before performing ensemble): 0.59673</li> </ul> <h2>KNN regression</h2> <p>Follows the <a href="https://www.kaggle.com/code/motono0223/sdip-clip-knnregression-zeroshot-method" target="_blank">public notebook</a>, except that I also added datasets processed on my own.</p> <h2>CLIP interrogator</h2> <p>Follows the <a href="https://www.kaggle.com/code/leonidkulyk/lb-0-45836-blip-clip-clip-interrogator" target="_blank">public notebook</a>.</p> <h2>Final thoughts</h2> <p>There are several things I didn’t have time to explore:</p> <ol> <li>Generate more datasets to increase the diversity.</li> <li>Filter more public datasets and add them to the KNN model.</li> <li>Explore more zero-shot methods.</li> </ol> <p>I’m still new to the multi-modal field, but this competition is a good starting point!</p>
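<p>As referenced in the preprocessing notes above, here is a minimal sketch of the correlation (cosine-similarity) filter built on unit-normalized sentence embeddings and <code>torch.matmul</code>. The model name, threshold default, and greedy loop are illustrative assumptions; a production version would compare candidates in large batches on GPU as the write-up suggests.</p> <pre><code>import torch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # model choice is an assumption

def filter_by_similarity(texts, threshold=0.8):
    # unit-length embeddings: dot product == cosine similarity
    emb = model.encode(texts, batch_size=256, convert_to_tensor=True,
                       normalize_embeddings=True)
    kept = []
    for i in range(len(texts)):
        if kept:
            sims = torch.matmul(emb[kept], emb[i])   # (n_kept,) cosine similarities
            if sims.max().item() >= threshold:
                continue                             # too close to an already-kept prompt
        kept.append(i)
    return [texts[i] for i in kept]
</code></pre>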
Predict Student Performance from Game Play
43th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Solution</h2> <ul> <li>Create features for each level_group.<ul> <li>In addition, use the previous level_group's features (a minimal sketch of this setup is shown at the end of this post).</li></ul></li> <li>LGBM and XGB models for each level.</li> <li>Optimize hyperparameters for each level (XGB only).</li> <li>I think the amount of features is almost the same as in the public notebooks.</li> </ul> <h3>What did not work for me</h3> <ul> <li>CatBoost model</li> <li>level_group probability as a feature for a stacking model.</li> <li>Sample weights for each level.</li> <li>Optimizing the F1-score threshold for each level.</li> <li>Using event sequences vectorized with w2v as GBDT features.</li> </ul> <h3>Not tried yet</h3> <ul> <li>Ensembling with knowledge tracing models (transformer or 1D CNN)</li> <li>Optimizing LGBM hyperparameters for each level.</li> </ul> <p>repo: <a href="https://github.com/konumaru/predict_student_performance" target="_blank">https://github.com/konumaru/predict_student_performance</a></p>
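<p>As referenced above, here is a minimal sketch of the per-level_group setup: build features per level_group, join in the previous group's features, and fit one gradient-boosted classifier per question. The feature and label layouts are placeholder assumptions, not the repository's actual code.</p> <pre><code>import pandas as pd
from xgboost import XGBClassifier

LEVEL_GROUPS = ["0-4", "5-12", "13-22"]
QUESTIONS = {"0-4": range(1, 4), "5-12": range(4, 14), "13-22": range(14, 19)}

def fit_per_level_group(features, labels):
    """features: {level_group: DataFrame indexed by session_id};
    labels: DataFrame with columns [session_id, question, correct] (layout assumed)."""
    models, prev = {}, None
    for lg in LEVEL_GROUPS:
        X = features[lg]
        if prev is not None:
            X = X.join(prev, rsuffix="_prev")  # carry over the previous group's features
        prev = X
        for q in QUESTIONS[lg]:
            y = (labels.loc[labels["question"] == q]
                       .set_index("session_id")["correct"]
                       .reindex(X.index))
            models[q] = XGBClassifier(n_estimators=300, max_depth=4).fit(X, y)
    return models
</code></pre>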
Vesuvius Challenge - Ink Detection
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>tattaka &amp; mipypf part</h2> <h3>Summary</h3> <ul> <li>2.5D and 3D backbone and decoder without upsampling<ul> <li>1/32 resolution is sufficient for this task</li> <li>By focusing resources on the encoder, more layers can be used for training</li></ul></li> <li>Strong regularization</li> </ul> <h3>Data preprocessing</h3> <ul> <li>Group K-fold is used for validation, but the large fragment (ink_id=2) is divided into 3 parts</li> <li>Input resolution is 256x256 for the 2.5D model and 192x192 for the 3D model<ul> <li>Cut out and store in npy in 32x32 (64x64 when inferring) to speed up data loading</li></ul></li> <li>The label is downsampled to 1/32 resolution using bilinear interpolation</li> <li>Input images were normalized for each sample</li> </ul> <pre><code># per-sample normalization; the dim indices and epsilon value are assumed
mean = img.mean(dim=(1, 2, 3), keepdim=True)
std = img.std(dim=(1, 2, 3), keepdim=True) + 1e-6
img = (img - mean) / std
</code></pre> <h3>Model</h3> <ul> <li>images(batch_size x channel x group x height x width) -&gt; 2dcnn backbone -&gt; pointwise conv2d neck -&gt; 3dcnn(ResBlockCSN like, 3 or 6 blocks) -&gt; avg + max pooling(z axis) -&gt; pointwise conv2d<ul> <li>2dcnn backbone: <ul> <li>resnetrs50</li> <li>convnext_tiny</li> <li>swinv2_tiny</li> <li>resnext50</li></ul></li> <li>The channel/group combinations used were 5x7 and 3x9. In other words, the middle 35 or 27 layers of the 65 layers are used.</li> <li>Referring to <a href="https://www.kaggle.com/tereka" target="_blank">@tereka</a>'s <a href="https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/392402#2170010" target="_blank">model architecture</a></li></ul></li> <li>images(batch_size x 1 x layers x height x width) -&gt; 3dcnn backbone -&gt; max pooling(z axis) -&gt; pointwise conv2d<ul> <li>3DCNN backbone: <ul> <li>resnet50-irCSN(layers: 32)</li> <li>resnet152-irCSN(layers: 24)</li></ul></li></ul></li> <li>loss: bce + global fbeta loss (fbeta score computed over the batch)</li> </ul> <h3>Training</h3> <ul> <li>amp</li> <li>EMA (decay=0.99)</li> <li>label smoothing (smooth = 0.1)</li> <li>drop_path_rate=0.2</li> <li>cutmix + mixup + manifold mixup</li> <li>heavy augmentation<ul> <li>cutout<ul> <li>modified to match the output resolution (1/32)</li></ul></li> <li>channel shuffle<ul> <li>Shuffle within the group after splitting</li></ul></li> <li>Random +-2 shift in z-direction of volume with 0.5 probability</li> <li>other transforms</li></ul></li> </ul> <pre><code># keyword argument names are assumed; probability and limit values as posted
albu.Compose(
    [
        albu.Flip(p=0.5),
        albu.RandomRotate90(p=0.9),
        albu.ShiftScaleRotate(
            shift_limit=0.0625,
            scale_limit=0.2,
            rotate_limit=15,
            p=0.9,
        ),
        albu.OneOf(
            [
                albu.ElasticTransform(p=0.3),
                albu.GaussianBlur(p=0.3),
                albu.GaussNoise(p=0.3),
                albu.OpticalDistortion(p=0.3),
                albu.GridDistortion(p=0.1),
                albu.PiecewiseAffine(p=0.3),  # IAAPiecewiseAffine
            ],
            p=0.9,
        ),
        albu.RandomBrightnessContrast(
            brightness_limit=0.3, contrast_limit=0.3, p=0.3
        ),
        ToTensorV2(),
    ]
)
</code></pre> <h3>Inference</h3> <ul> <li>fp16 inference</li> <li>stride=32</li> <li>Use weights learned on val_ink_id=(1, 2a)</li> <li>TTA<ul> <li>h/v flip</li> <li>Switch the TTA variant at each stride step</li></ul></li> <li><a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a> proposed <a 
href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">percentile threshold</a><ul> <li>Always predicts the same amount of positives, so it is independent of model performance and depends on the pos/neg ratio of the GT</li> <li>Calculated for all pixels except the area outside the mask</li> <li>0.9 and 0.93 were used</li> <li>We expected 0.90 to be a better score in private, but 0.93 was better in the end!</li></ul></li> </ul> <h2>yukke42 part</h2> <h3>Summary</h3> <ul> <li>3D encoder and 2D/1D encoder<ul> <li>1/2 or 1/4 resolution prediction</li> <li>very simple decoder<br> ​</li></ul></li> </ul> <h3>Data preprocessing</h3> <ul> <li>split the 2nd fragment into two image vertically: 4 folds<br> ​</li> </ul> <h3>Model</h3> <ul> <li>3D CNN encoder and 2D Encoder<ul> <li>encoderbased on <a href="https://www.kaggle.com/samfc10" target="_blank">@samfc10</a> 's Notebook: <a href="https://www.kaggle.com/code/samfc10/vesuvius-challenge-3d-resnet-training" target="_blank">Vesuvius Challenge - 3D ResNet Training</a><ul> <li>remove maxpooling after the 1st CNN</li> <li>use attention before reduce D-dim</li> <li>use resnet18 and resnet34</li></ul></li> <li>decoder<ul> <li>a single 2D CNN layer</li> <li>upsample with a nearest interpolation</li></ul></li> <li>output resolution is downsampled to 1/2. then upsample with a bilinear interpolation</li></ul></li> <li>3D transformer encoder and linear decoder<ul> <li>encoder: use <a href="https://pytorch.org/vision/main/models/video_mvit.html" target="_blank">MViTv2-s</a> of the PyTorch official implementation and a pre-trained model<ul> <li>modify forward function to get each scale output</li> <li>replace MaxPool3d into MaxPool2d for the reproducibility</li></ul></li> <li>decoder: a single linear and patch expanding to upscale low resolutions 3D images<ul> <li>patch expanding is from <a href="https://arxiv.org/abs/2105.05537" target="_blank">Swin-Unet</a></li></ul></li> <li>output resolution is downsampled to 1/2 or 1/4, then upsample with a bilinear interpolation<br> ​</li></ul></li> </ul> <h3>Training</h3> <ul> <li>amp</li> <li>torch.compile</li> <li>label smoothing</li> <li>cutout</li> <li>cutmix</li> <li>mixup</li> <li>data augmentation<ul> <li>referred <a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a> 's notebook: <a href="https://www.kaggle.com/code/tanakar/2-5d-segmentaion-baseline-training" target="_blank">2.5d segmentaion baseline [training]</a></li> <li>referred <a href="https://www.kaggle.com/tattaka" target="_blank">@tattaka</a> and <a href="https://www.kaggle.com/mipypf" target="_blank">@mipypf</a> 's</li></ul></li> <li>patch_size=224 and stride=112<ul> <li>stride=75 or stride=56 didn't work<br> ​</li></ul></li> </ul> <h3>Inference</h3> <ul> <li>fp16 inference</li> <li>stride=75<ul> <li>better than stride=112</li></ul></li> <li>ignore edge of output prediction<ul> <li>use only the red area of prediction (figure below)<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1745801%2Fc309c518581d183e62897d4249eb7e7e%2Fimage.png?generation=1686801349236365&amp;alt=media" alt=""></li></ul></li> </ul> <h2>Code &amp; inference notebook</h2> <ul> <li>tattaka &amp; ron part<ul> <li>training code: <a href="https://github.com/mipypf/Vesuvius-Challenge/tree/winner_call/tattaka_ron" target="_blank">https://github.com/mipypf/Vesuvius-Challenge/tree/winner_call/tattaka_ron</a></li> <li>inference code: <a 
href="https://www.kaggle.com/code/mipypf/ink-segmentation-2-5d-3dcnn-resnet3dcsn-fp16fold01?scriptVersionId=132226669" target="_blank">https://www.kaggle.com/code/mipypf/ink-segmentation-2-5d-3dcnn-resnet3dcsn-fp16fold01?scriptVersionId=132226669</a></li></ul></li> <li>yukke42 part<ul> <li>training code: <a href="https://github.com/yukke42/kaggle-vesuvius-challenge-ink-detection" target="_blank">https://github.com/yukke42/kaggle-vesuvius-challenge-ink-detection</a></li></ul></li> </ul>
Vesuvius Challenge - Ink Detection
9th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>ensemble = 2-fold x 2-model</p> <p>[2-model]</p> <ul> <li>resnet34d-unet (crop=256, depth=32 slices)</li> <li>pvtv2-b3-daformer (crop=384, depth=16 slices)</li> </ul> <p>[2-fold]</p> <ul> <li>validation fragment-1 and fragment-2aa</li> </ul> <hr> <p>training code and solution presentation (slide and video):<br> <a href="https://github.com/hengck23/solution-vesuvius-challenge-ink-detection" target="_blank">https://github.com/hengck23/solution-vesuvius-challenge-ink-detection</a></p> <p>the public submission notebook (clean version) is at:<br> <a href="https://www.kaggle.com/code/hengck23/9th-place-final-ensemble-v1?scriptVersionId=136499959" target="_blank">https://www.kaggle.com/code/hengck23/9th-place-final-ensemble-v1?scriptVersionId=136499959</a></p> <p>more solution discussion:<br> <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/407972" target="_blank">https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/407972</a></p> <p>it is similar to <a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a>'s 11th place solution:<br> <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/417281" target="_blank">https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/417281</a></p>
March Machine Learning Mania 2023
4th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Details:<br> Private Leaderboard Score: 0.17557<br> Private Leaderboard Place: 4th<br> Name: Nicholas Hilton<br> Location: Brooklyn, NY<br> Email: <a href="mailto:[email protected]">[email protected]</a></p> <h2>My Background</h2> <p>I graduated from the University of Cambridge with a maths degree in 2015. From there I worked at Manchester City Football Club before teaching myself how to code and getting a job at Tribe Dynamics, a start up in San Francisco. There I learnt how to code professionally and data science skills. After 5 years in SF I left Tribe Dynamics to work as an Machine Learning Engineer for Gro Intelligence in New York where I am today.</p> <p>I have been doing this competition since 2018 and have enjoyed it every year as I both enjoy the actual competition on TV and it gives me a chance to do some data science outside of my job. I used roughly the same method every year and it has performed a bit better every year as I make improvements. In total I spent about 3 weeks total on the competition over the last 5 years, this year making a few small tweaks before running my training process with the new data.</p> <h2>Model Summary</h2> <p>My model implements an ELO rating system and then uses those ratings to produce predictions for each matchup, adjusting for a few other features (rebounding, field goal percentage (overall and 3 point), seed). The rating difference and adjustments are then passed into a link function which converts the adjusted rating difference to a probability for the matchup.</p> <p>The training process involved tuning the parameters of the model, mainly the feature adjustments and the rate at which ELO ratings are updated after each match. A single training run would simulate all matches from 2010, keeping track of ratings over time, using the tournament score for each season as the metric to evaluate that set of params on. 
During the regular season there was also an adjustment for home or away teams but in the tournament all matches were treated as neutral.</p> <p>I think the training process could be optimised in the future as I ran a grid search over a sensible parameter space and let my macbook run all day chugging through the param space.</p> <h3>Final Params</h3> <p>Final ELO ratings are spread over a ~400 point range for tournament teams so weights below are related to that rating spread.</p> <p><em>Mens:</em><br> 'k': 150 # how fast the ELO ratings update - at the high end of the param range tested<br> 'seed': -40 # the seed difference to adjust - Medium End<br> 'link': 'N' # The response function to turn a rating diff into a probability - Normal distribution<br> 'fgp': 1200.0 # Field Goal Percentage difference - High end<br> 'fgp3': -10.0 # Field Goal 3pt Percentage difference - Low end<br> 'reb': 15.0 # Rebounding Average difference - Medium End</p> <p><em>Womens:</em><br> 'k': 80 # Medium End<br> 'seed': -40 # Medium End<br> 'link': 'L' # Logistic Function<br> 'fgp': 1200.0 # High end<br> 'fgp3': 0.0 # Low end<br> 'reb': 15.0 # Medium End</p> <h2>References</h2> <p>Model Repo can be found <a href="https://github.com/NickHilton/March-Madness" target="_blank">here</a><br> Write up of details on the Elo system (for soccer but the initial work I based this model off) can be found <a href="https://drive.google.com/file/d/1mCGCTWuxvyWAXaOhiqt0ncUExs4LsOPf/view" target="_blank">here</a><br> The idea is similar to 538’s methodology <a href="https://fivethirtyeight.com/methodology/how-our-march-madness-predictions-work-2/" target="_blank">here</a></p> <h2>Interesting Insights</h2> <p>I think a key improvement I made to the model last year (when I placed in the top 50) and which probably helped me this year, was simulating the tournament matches and updating the ratings throughout the tournament. I.e. If a matchup was a R2 matchup between W1 and W8 then W1 must have beaten W16 and W8 must have beaten W9 and so you can update the ratings before predicting each matchup in the tournament. I found a rating system which had a higher importance on recent form, performed better this year, and probably helped with the number of upsets that happened (Iowa for example in the women’s tournament I ended up rating highly at the end)</p> <p>Another thing which helped me a bit was picking a team to do well and adjusting their ratings as a sort of gamble. I don’t think its as important now there are 126 games, but I picked Princeton to do well in the mens tournament and updated their ratings to do so. I figured that if you are competing in a pool of &gt;1k entrants, who are all likely to have good models, having a ~5% chance of a big boost to your score means that whilst your mean score might go down, your likelihood of winning goes up (go big or go home). That being said, I only should have picked them to win 2 games to get the boost I needed as when they lost their third game most of the advantage I had was wiped out. In hindsight, rerunning my model without this manual choice gives a score which would still have put me in 4th place.</p> <h2>Model Simplicity</h2> <p>My model is actually relatively simple from a Machine Learning point of view. It doesn’t use any advanced ML libraries, instead it is a tried and tested algorithm for rating sports teams/players (mainly Chess) and I was happy to have found a way to apply this to the tournament. 
I think the time-consuming part came from building out the actual code to implement the ELO model with the data structured as it is, as well as running the grid search over the param space.</p> <h2>Final Thoughts</h2> <p>I thoroughly enjoyed this competition, as I have in prior years when I wasn't near the top, since it takes one of the highlights of the sports year and allows me to nerd out on it. I will keep doing this competition and look forward to thinking of improvements to my model for next year.</p> <p>I also appreciate that with the scores so tight at the top of the leaderboard, and games coming down to final shots regularly, a few baskets made or missed could have sent me tumbling down the leaderboard. The beauty of competitions like these based on sports is the drama that unfolds as you watch your model perform in real time, and it really was a fun couple of weeks, and a good time to get a bit lucky.</p>
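<p>To make the core mechanics of the model summary concrete, here is a minimal sketch of an Elo-style rating system with the two link functions mentioned in the final params ('L' = logistic, 'N' = normal CDF): an adjusted rating difference is mapped to a win probability, and both ratings are updated with the k factor after each game. The scale constant and function names are illustrative, not the author's tuned values.</p> <pre><code>from scipy.stats import norm

def win_probability(rating_diff, link="L", scale=400.0):
    """Adjusted rating difference -> P(team A beats team B)."""
    if link == "L":                                   # logistic link
        return 1.0 / (1.0 + 10.0 ** (-rating_diff / scale))
    return norm.cdf(rating_diff / scale)              # normal-CDF link

def elo_update(rating_a, rating_b, a_won, k=150.0, link="L"):
    """Update both ratings after one game; k controls how quickly ratings react to results."""
    expected_a = win_probability(rating_a - rating_b, link=link)
    delta = k * ((1.0 if a_won else 0.0) - expected_a)
    return rating_a + delta, rating_b - delta
</code></pre> <p>In the tournament simulation described above, <code>elo_update</code> would also be applied after each predicted round, so later-round matchups use refreshed ratings rather than the end-of-regular-season ones.</p>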
Vesuvius Challenge - Ink Detection
3rd place solution
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we want to thank the hosts who brought us a great opportunity to be a part of resurrecting an ancient library, which is really awesome! And congrats to all winners!</p> <h2>Summary</h2> <p>Our final solution chosen is a 15-unet-ensemble, which only use ir-CSN as 3d encoder with simple mean pooling bridging to 2d decoder. This idea refers to <a href="https://www.kaggle.com/samfc10" target="_blank">@samfc10</a>'s Notebook: <a href="https://www.kaggle.com/code/samfc10/vesuvius-challenge-3d-resnet-training" target="_blank">Vesuvius Challenge - 3D ResNet Training</a>. And our training and inference pipeline refer to <a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a>'s great scalable sharing: <a href="https://www.kaggle.com/code/tanakar/2-5d-segmentaion-baseline-training/comments" target="_blank">2.5d segmentaion baseline</a>. Let me explain the details below.</p> <h2>Dataset</h2> <h3>Fold split</h3> <p>At first we just use the original 3 fragments for our local validation, and we got a very bad score on our first ensemble whose fold 2 scored the worst. So we thought it might caused by less training sample on fold 2.</p> <p>We split fragment 2 to 3 fragments referring to <a href="https://www.kaggle.com/junxhuang" target="_blank">@junxhuang</a>'s <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/398329#2201953" target="_blank">comment</a> and <a href="https://www.kaggle.com/tattaka" target="_blank">@tattaka</a>'s <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/401667" target="_blank">discussion</a>.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9735064%2F7cbdc92edfa4491a4405db81528f99ce%2F1.png?generation=1686897867291657&amp;alt=media" alt=""></p> <p>And we found it interesting that our best public score 0.77 is a single ir-CSN-r50 unet trained on fold 2 with 4 rotation tta which gets 0.66 locally and 0.73 without tta. Of all our experiments, fold 2 always gets high score on public lb comparing to other folds.</p> <h3>Slice choose</h3> <p>We had struggled a long time on how many slices can bring us higher score. We chose slice following <a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a>'s baseline, and experimented <code>8*i</code>slices with i select from 1, 2, 3, 4, 6. 24, 32 and 48 slices all gave us robust performance according to lb and cv. After seeing <a href="https://www.kaggle.com/pavelgonchar" target="_blank">@pavelgonchar</a>'s <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/403348#2235071" target="_blank">discussion</a>, we stay in 24 slices for most of our experiments.</p> <h3>Patch size</h3> <p>We spent almost 2/3 time in tuning model on 224x224 cropping size patches, because the smaller the size is the more training samples we can get. And cropping stride 224//2 gave us 13272 samples in total, which is sufficient to train a r50 type unet. 
The most robust single r50 type model trained on 224x224 cropping size and 224//2 cropping stride is the fold 5 scoring 0.74(with 4 rotation tta) on lb and 0.74 cv.</p> <p>In the last two weeks, after reading many brilliant winning solutions, e.g. <a href="https://www.kaggle.com/hesene" target="_blank">@hesene</a>'s <a href="https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/337468" target="_blank">3rd place solution</a>, we decide to use bigger resolution and use multi-size cropping inference as our final submission. Because it is easy to get in mind that big cropping size can get the entire character for models. But there is a problem that the bigger patch we crop the little samples we get. When using a 864x864 size patch with 864//6 , our r152 type unet can easily overfit within 2 epochs. So we make some trade-off there, doing experiments on 500-700 size patch, and these sizes work for us. The best single big resolution model is 576x576 patch size r152 type unet on fold 5, scoring 0.77 locally.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9735064%2F0598a60aad240ecc111899ca897d5975%2F2.png?generation=1686897881870354&amp;alt=media" alt=""></p> <h2>Models</h2> <h3>Architecture</h3> <p>Only unet is used for our segmentation models.</p> <h3>Backbone</h3> <p>In most of our experiments, we chose ir-CSN-r50 as encoder of our unets. After reading <a href="https://www.kaggle.com/selimsef" target="_blank">@selimsef</a>'s <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/364837" target="_blank">4th place solution</a> and <a href="https://www.kaggle.com/nvnnghia" target="_blank">@nvnnghia</a>'s <a href="https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/391635" target="_blank">1st place solution</a>, we stick to openmmlab's great implementation and pretrained weights.</p> <h3>Decoder</h3> <p>This is our first time participating a segmentation competition. At the very beginning, we spent so much time to build a better decoder, but after reading some solutions, we found that the channels of decoder is the most unconcerned part. From the experiments, light weight decoders perform good enough, so all our unets' decoder channel start from 256. Our decoder code is using <a href="https://github.com/selimsef/xview3_solution" target="_blank">selimsef's xview3_solution</a>, and we made some modifications.</p> <h3>Backbone-data align</h3> <p>At first we gave model with 1 channel voxel. We need to sum the first conv pretrained weights to 1 channel like <a href="https://www.kaggle.com/samfc10" target="_blank">@samfc10</a> did.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9735064%2Fe683a7577522d400d937e953a738270f%2F3.png?generation=1686897894991869&amp;alt=media" alt=""></p> <p>But after reading <a href="https://www.kaggle.com/pelegshilo" target="_blank">@pelegshilo</a>'s <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/395676#2196236" target="_blank">comment</a> and <a href="https://www.kaggle.com/hengck23" target="_blank">‎‎‎‎‎‎‎‎@hengck23</a>'s great <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/407972#2302894" target="_blank">topic</a>, we thought maybe we can make a full use of pretrained weights of first conv and let model choose the best z location. 
We simply do concatenation from temporal dimension to channel dimension with 3 overlapped voxels.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9735064%2Fd1c1046526fca219835eaff4e0bdb42e%2F4.png?generation=1686897907062423&amp;alt=media" alt=""></p> <p>In our experiments, input voxel size (28,224,224) with overlapped start indices [0,2,4] which give 3 24-slice voxels, boosting cv from 0.01 to 0.03 on different folds.</p> <h2>Training settings</h2> <h3>Data correlated</h3> <ul> <li><p>We use <a href="https://www.kaggle.com/code/tanakar/2-5d-segmentaion-baseline-training/comments" target="_blank">2.5d segmentaion baseline</a> provided augmentations and change the probabilities for different experiment. </p></li> <li><p>We set the ShiftScaleRotate's rotate_limit to 180 referring to <a href="https://www.kaggle.com/hengck23" target="_blank">‎‎‎‎‎‎‎‎@hengck23</a>'s <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/407972#2302894" target="_blank">topic</a>.</p></li> <li><p>Using mixup and cutmix boost cv differently on each fold and with different mix probabilities. We comment out cutout when use cutmix.</p></li> <li><p>Segmenting on normalized pixels works, referring to <a href="https://www.kaggle.com/yoyobar" target="_blank">@yoyobar's</a> <a href="https://www.kaggle.com/code/yoyobar/3d-resnet-baseline-inference" target="_blank">sharing</a>.</p></li> <li><p>We use clean input meaning we filtered out all zero input.</p></li> </ul> <h3>Model correlated</h3> <ul> <li><p>We use adamw optimizer and onecycle scheduler.</p></li> <li><p>We use BCE and hard dice with different weighted average in our experiments.</p></li> <li><p>We use EMA and it boosts cv sometimes.</p></li> <li><p>DDP training and validation is used when training models on big resolution patches.</p></li> <li><p>Gradient accumulation is used when training models on big resolution patches. We use batch size 16 in all our experiments.</p></li> </ul> <h2>Inference</h2> <ul> <li>We use smaller cropping stride comparing to which we use in training.</li> <li>Multi-size cropping inference ensemble help us survive in such shrinking</li> <li>We do not select threshold for each model prediction. We just use 0.5.</li> </ul> <p>Our final solution is a 224-384-576-15-unet ensemble with no tta which scoring 0.76 on public. In this setting we use 28-24 slice selection that means slice file id range from 18 to 46 and copying it to 3 24-slice voxels with slice start index [0,2,4]. For 576 size model we scale up to r152 and the rest are r50. This give us 3rd place on private.</p> <p>We find that rotation tta do not boost much when add some big resolution models.</p> <p>But we have 6 different ensemble settings higher than 0.682 private score. The best one scoring 0.687873 on private with 0.749338 on public. In this solution we add 5 more r152 unets with training input size 640 which has 36-32 slice with start index [0,2,4]. 
We didn't choose it because the low lb and some overfitting performance between train loss and validation loss.<br> Here we list two of them.</p> <table> <thead> <tr> <th>ensemble models</th> <th>input size</th> <th>number of slices</th> <th>cropping stride</th> <th>public score</th> <th>private score</th> </tr> </thead> <tbody> <tr> <td>r50,r50,r152</td> <td>224,384,576</td> <td>28-24,28-24,28-24</td> <td>224//8,384//6,576//8</td> <td>0.763384</td> <td>0.681137</td> </tr> <tr> <td>r50,r50,r152,r152</td> <td>224,384,576,640</td> <td>28-24,28-24,28-24, 36-32</td> <td>224//2,384//3,576//4,640//5</td> <td>0.749338</td> <td>0.687873</td> </tr> <tr> <td>r152</td> <td>576</td> <td>28-24</td> <td>576//12</td> <td>0.720028</td> <td>0.683273</td> </tr> </tbody> </table> <h2>Tried but didn't work</h2> <ul> <li>2d models</li> <li>pure 3d models</li> <li>stacked unet which use 2d denoiser</li> <li>training a 3d denoiser</li> <li>classification branch for stronger supervision</li> <li>add maxpooling and conv layers between encoder and decoder</li> </ul> <h2>Acknowledgement</h2> <p>We want to thank all those kagglers we mentioned above. We have learned a lot from your sharing, and there is still a lot we need to learn.</p> <p>We also want to thank the kaggle community, everyone here is free to share their insight that is really helpful for one's growing up.</p> <p>Again, Thanks to the hosts who bring such great competition! We have seen a lot of brilliant teams with wonderful solution.</p> <p>This is my teammate <a href="https://www.kaggle.com/yufujiang" target="_blank">@yufujiang</a>'s first time participating in a kaggle competition who is also my roommate at college, many thanks to him. And this is my 3rd time competition on kaggle, finally I got my first medal!</p> <h2>Code</h2> <p><a href="https://www.kaggle.com/code/traptinblur/3rd-place-ensemble-576-8-384-6-224-8/notebook" target="_blank">3rd place inference code</a><br> <a href="https://github.com/traptinblur/VCID_2023_3rd_place_code/tree/main" target="_blank">3rd place training code repo</a></p>
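<p>For readers who want a concrete picture of the Backbone-data align trick, the snippet below splits a 28-slice volume into three overlapped 24-slice voxels (start indices 0, 2, 4) stacked on the channel axis, so the pretrained 3-channel first conv can be reused as-is. The function name and tensor layout are assumptions, not the exact training code.</p>
<pre><code>import torch


def to_overlapped_voxels(volume, n_slices=24, starts=(0, 2, 4)):
    # volume: (D, H, W) stack of slices with D = 28 as in the write-up.
    # Returns (3, n_slices, H, W): three overlapped 24-slice voxels stacked on
    # the channel axis, so the pretrained 3-channel first conv of the ir-CSN
    # encoder can be used without summing its weights into one channel.
    return torch.stack([volume[s:s + n_slices] for s in starts], dim=0)


x = torch.randn(28, 224, 224)
print(to_overlapped_voxels(x).shape)  # torch.Size([3, 24, 224, 224])
</code></pre>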
Vesuvius Challenge - Ink Detection
12th place solution
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1><strong>12th place solution</strong></h1> <p>Thank you to the organizers for hosting this exciting competition and congratulations to all the winners! This is my first time participating in a competition like this, and below is my solution and approach.</p> <h2><strong>overview</strong></h2> <p>we use 3D Resnet(<a href="https://github.com/kenshohara/3D-ResNets-PyTorch/blob/master/models/resnet.py" target="_blank">https://github.com/kenshohara/3D-ResNets-PyTorch/blob/master/models/resnet.py</a>) for our encoder and CNN decoder.</p> <p>We tried different decoders, such as Unet, Uperhead, CNN, and ResCNN. In the end, the CNN decoder, while being the simplest, achieved the best results.</p> <h2><strong>model architecture</strong></h2> <table> <thead> <tr> <th>backbone</th> <th>decoder</th> <th>Z-DIM</th> </tr> </thead> <tbody> <tr> <td>3DResnet-18</td> <td>CNN</td> <td>22</td> </tr> <tr> <td>3DResnet-34</td> <td>CNN</td> <td>22</td> </tr> </tbody> </table> <h2><strong>training</strong></h2> <p>Z_DIM:21-&gt;43<br> img_size:224<br> stride_size:56</p> <h3>augmentation:</h3> <p>A.Rotate(limit=90, p=0.5),<br> A.HorizontalFlip(p=0.5),<br> A.VerticalFlip(p=0.5),<br> A.RandomBrightnessContrast(p=0.75),<br> A.ShiftScaleRotate(p=0.75),<br> A.OneOf([<br> A.GaussNoise(),<br> A.GaussianBlur(),<br> A.MotionBlur(),<br> ], p=0.5),<br> A.GridDistortion(num_steps=5, distort_limit=0.3, p=0.5),<br> A.CoarseDropout(max_holes=1, max_width=int(size * 0.3), max_height=int(size * 0.3), <br> mask_fill_value=0, p=0.5),<br> A.Normalize(<br> mean= [0] * in_chans,<br> std= [1] * in_chans<br> ),<br> ToTensorV2(transpose_mask=True)</p> <h3>training details</h3> <p>We used two different training methods. The first method involved selecting one folder from 1, 2, and 3 as the validation set and training three models. However, due to poor performance on validation set 2, we only used models trained with validation sets 1 and 3 for our 3DRsnet-18 model. The second method involved randomly selecting image blocks from 1, 2, and 3 as the validation set. 
We used this method to train 4 models: two based on 3DResnet-18 with a stride of 56 and 37, and another two based on 3DResnet-34 with a stride of 56 and 37.</p> <h2>others</h2> <p>BCELoss<br> Adamw<br> GradualWarmupSchedulerV2(<a href="https://www.kaggle.com/code/underwearfitting/single-fold-training-of-resnet200d-lb0-965" target="_blank">https://www.kaggle.com/code/underwearfitting/single-fold-training-of-resnet200d-lb0-965</a>)</p> <h2><strong>inference</strong></h2> <p>img_size:224<br> stride_size:224<br> th:0.5<br> TTA: 4 rotates, h/v flips<br> last model:<br> (3DResnet-18 + CNN) * 4 + (3DResnet-34 +CNN) * 2<br> public:0.786897<br> private:0.652892</p> <p>training code:<a href="https://github.com/tkz24589/Deep_Learning_work_space/tree/main/convolution/segmentation/vesuvius-challenge-ink-detection-tutorial" target="_blank">https://github.com/tkz24589/Deep_Learning_work_space/tree/main/convolution/segmentation/vesuvius-challenge-ink-detection-tutorial</a><br> notebook example: <a href="https://www.kaggle.com/code/kongzhangtang/resnet3d-cnn" target="_blank">https://www.kaggle.com/code/kongzhangtang/resnet3d-cnn</a></p>
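<p>As a rough sketch of the rotation/flip TTA listed above, the helper below averages sigmoid predictions over every combination of the 4 rotations and identity/horizontal/vertical flips (12 variants), undoing each transform before averaging; the exact transform set in the original notebook may differ, and the model is assumed to output a 2D ink mask.</p>
<pre><code>import torch


@torch.no_grad()
def tta_predict(model, x):
    # x: (B, C, D, H, W) voxel batch; model output assumed to be (B, 1, H, W).
    preds = []
    for k in range(4):                                   # 4 rotations
        xr = torch.rot90(x, k, dims=(-2, -1))
        for flip_dims in (None, (-1,), (-2,)):           # identity, h-flip, v-flip
            xf = xr if flip_dims is None else torch.flip(xr, dims=flip_dims)
            p = torch.sigmoid(model(xf))
            if flip_dims is not None:
                p = torch.flip(p, dims=flip_dims)        # undo the flip
            preds.append(torch.rot90(p, -k, dims=(-2, -1)))  # undo the rotation
    return torch.stack(preds).mean(dim=0)
</code></pre>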
AMP®-Parkinson's Disease Progression Prediction
3rd place solution
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: AMP®-Parkinson's Disease Progression Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank Kaggle and AMP PD for organizing this great competition. I would also like to thank all the kagglers who participated in this competition. I am very happy to have won my first gold medal and also to have won a prize.</p> <p>My solution is simple and consists of three main functions.</p> <h2>1. Grouping</h2> <p>As many have pointed out, I have divided the groups into two, each optimized for a different visit interval (6 or 12 months).<br> One key point is that we need to focus not only on the 6th month, but also on the 18th month. There are patients who are missing the 6th month but have the 18th month present, and this patient is not healthy.<br> By using the cumulative minimum function, I considered patients with either the 6th month or the 18th month present as unhealthy.</p> <h4>[Group A : Healthy]</h4> <ul> <li>Patients with a minimum visit interval of 12 months or more</li> </ul> <h4>[Group B : Unhealthy]</h4> <ul> <li>Patients with a minimum visit interval of 6 months or less</li> </ul> <h2>2. Labeling (mainly Group B)</h2> <p>As I looked at the data for Group B, the unhealthy group, I found several patterns. The frequency of protein collection and protein information are linked to the severity of symptoms. I then generated several labels based on protein collection frequency and protein information and used them as features.</p> <p>The following 9 labels were finally adopted.</p> <h4>[more severe symptoms]</h4> <ul> <li>Protein was collected at 6 months</li> <li>Protein was collected at 6 months and again at 12 months</li> <li>Low number of unique "UniPort" (approximately the bottom 20%)</li> <li>Low number of unique "UniPort" (approximately the bottom 10%)</li> </ul> <h4>[milder symptoms]</h4> <ul> <li>Protein not collected at 6 months</li> <li>Protein was collected at 6 months but not at 12 months</li> <li>Protein not collected at 6 months, but collected at 18 months</li> <li>High number of unique "UniPort" and high change of "Peptide" from the previous measurement (approximately the top 20%)</li> <li>High number of unique "UniPort" and high change of "Peptide" from the previous measurement (approximately the top 10%)</li> </ul> <h2>3. Modeling</h2> <p>Initially, I also tried LightGBM using these features, but LB worsened and I did not use them in my final model. In my final model, I used these features to obtain the coefficients (severity) by grid search.<br> Due to the small sample size of the train data (248 patients), some labels improved for the train data (248 patients) but worsened for the LB (50 patients). In my various experiments, I decided to adopt only those that improved both train and LB (248 + 50 patients).<br> I thought this would be a more robust model. As a result, the final scores were also stable.</p>
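<p>A minimal sketch of the grouping rule from section 1, assuming the competition's patient_id and visit_month columns: a patient with any semi-annual visit (6th, 18th, … month) goes to Group B, otherwise Group A. The author's version uses a cumulative minimum over visit intervals; this modular check is a simplification that illustrates the same idea.</p>
<pre><code>import pandas as pd

visits = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 2],
    "visit_month": [0, 12, 24, 0, 18, 30],
})


def assign_groups(df):
    # Group B ("unhealthy"): the patient has at least one semi-annual visit
    # (month 6, 18, 30, ...), covering the 6th- or 18th-month cases above.
    # Group A ("healthy"): only annual visits, so the minimum interval is 12+.
    has_semiannual = (df["visit_month"] % 12 == 6).groupby(df["patient_id"]).any()
    return has_semiannual.map(lambda b: "B" if b else "A")


print(assign_groups(visits))   # patient 1 -&gt; A, patient 2 -&gt; B
</code></pre>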
Parkinson's Freezing of Gait Prediction
1st place solution: transformer and acceleration data
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Greetings to the Kaggle Community. In this message I want to tell you about my solution.</p> <p>Thanks to Kaggle for providing free GPU and TPU resources to everyone. On my graphics card (1050 Ti) I would not have achieved those results.<br> Thanks to Google for the excellent tensorflow library.<br> All of my work was done in Kaggle Notebooks and relies on TensorFlow capabilities.</p> <p>The key decisions that, in my opinion, led to a good result:</p> <ol> <li>Use a combination of transformer encoder and two BidirectionalLSTM layers.</li> <li>Use patches like VisualTransformer.</li> <li>Reduce the resolution of targets.</li> </ol> <p><em>How does it work?</em></p> <p>Suppose we have a tdcsfog sensor data series with AccV, AccML, AccAP columns and len of 5000.</p> <p>First, apply mean-std normalization to AccV, AccML, AccAP columns.</p> <pre><code> (): mean = tf.math.reduce_mean(sample) std = tf.math.reduce_std(sample) sample = tf.math.divide_no_nan(sample-mean, std) sample.numpy() </code></pre> <p>Then the series is zero-padded so that the final length is divisible by block_size = 15552 (or 12096 for defog). Now the series shape is (15552, 3). </p> <p>And create patches with the patch_size = 18 (or 14 for defog):</p> <pre><code>series series = tf.reshape(series, shape=(CFG[] // CFG[], CFG[], )) series = tf.reshape(series, shape=(CFG[] // CFG[], CFG[]*)) </code></pre> <p>Now the series shape is (864, 54). It's a model input.</p> <p>What to do with the StartHesitation, Turn, Walking data? Same, but apply tf.reduce_max at the end.</p> <pre><code>series_targets series_targets = tf.reshape(series_targets, shape=(CFG[] // CFG[], CFG[], )) series_targets = tf.transpose(series_targets, perm=[, , ]) series_targets = tf.reduce_max(series_targets, axis=-) </code></pre> <p>Now the series shape is (864, 3). It's a model output.</p> <p>At the end, simply return the true resolution with tf.tile</p> <pre><code>predictions = model.predict(...) predictions = tf.expand_dims(predictions, axis=-) predictions = tf.transpose(predictions, perm=[, , , ]) predictions = tf.tile(predictions, multiples=[, , CFG[], ]) predictions = tf.reshape(predictions, shape=(predictions.shape[], predictions.shape[]*predictions.shape[], )) </code></pre> <h1>Details</h1> <p>Daily data, events.csv, subjects.csv, tasks.csv have never been used.</p> <p>Tdcsfog data is not used to train defog models. 
</p> <p>Defog data is not used to train tdcsfog models.</p> <p><em>Optimizer</em> </p> <pre><code>tf.keras.optimizers.Adam(learning_rate=Schedule(LEARNING_RATE, WARMUP_STEPS), beta_1=, beta_2=, epsilon=) </code></pre> <p><em>Loss function</em></p> <pre><code> ce = tf.keras.losses.BinaryCrossentropy(reduction=) (): loss = ce(tf.expand_dims(real[:, :, :], axis=-), tf.expand_dims(output, axis=-)) mask = tf.math.multiply(real[:, :, ], real[:, :, ]) mask = tf.cast(mask, dtype=loss.dtype) mask = tf.expand_dims(mask, axis=-) mask = tf.tile(mask, multiples=[, , ]) loss *= mask tf.reduce_sum(loss) / tf.reduce_sum(mask) </code></pre> <p><em>Model</em> </p> <pre><code>CFG = {: , : , : //, : , : , : , : , : , : , : , : , } (tf.keras.layers.Layer): (): ().__init__() self.mha = tf.keras.layers.MultiHeadAttention(num_heads=CFG[], key_dim=CFG[], dropout=CFG[]) self.add = tf.keras.layers.Add() self.layernorm = tf.keras.layers.LayerNormalization() self.seq = tf.keras.Sequential([tf.keras.layers.Dense(CFG[], activation=), tf.keras.layers.Dropout(CFG[]), tf.keras.layers.Dense(CFG[]), tf.keras.layers.Dropout(CFG[]), ]) (): attn_output = self.mha(query=x, key=x, value=x) x = self.add([x, attn_output]) x = self.layernorm(x) x = self.add([x, self.seq(x)]) x = self.layernorm(x) x (tf.keras.Model): (): ().__init__() self.first_linear = tf.keras.layers.Dense(CFG[]) self.add = tf.keras.layers.Add() self.first_dropout = tf.keras.layers.Dropout(CFG[]) self.enc_layers = [EncoderLayer() _ (CFG[])] self.lstm_layers = [tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(CFG[], return_sequences=)) _ (CFG[])] self.sequence_len = CFG[] // CFG[] self.pos_encoding = tf.Variable(initial_value=tf.random.normal(shape=(, self.sequence_len, CFG[]), stddev=), trainable=) (): x = x / x = self.first_linear(x) training: random_pos_encoding = tf.roll(tf.tile(self.pos_encoding, multiples=[GPU_BATCH_SIZE, , ]), shift=tf.random.uniform(shape=(GPU_BATCH_SIZE,), minval=-self.sequence_len, maxval=, dtype=tf.int32), axis=GPU_BATCH_SIZE * [], ) x = self.add([x, random_pos_encoding]) : x = self.add([x, tf.tile(self.pos_encoding, multiples=[GPU_BATCH_SIZE, , ])]) x = self.first_dropout(x) i (CFG[]): x = self.enc_layers[i](x) i (CFG[]): x = self.lstm_layers[i](x) x (tf.keras.Model): (): ().__init__() self.encoder = FOGEncoder() self.last_linear = tf.keras.layers.Dense() (): x = self.encoder(x) x = self.last_linear(x) x = tf.nn.sigmoid(x) x </code></pre> <h1>Submission (Private Score 0.514, Public Score 0.527) consists of 8 models:</h1> <h3>Model 1 (tdcsfog model)</h3> <pre><code>CFG = {: , : , : //, : , : , : , : , : , : , : , : , } LEARNING_RATE = / STEPS_PER_EPOCH = WARMUP_STEPS = BATCH_SIZE= </code></pre> <p>Validation subjects <br> ['07285e', '220a17', '54ee6e', '312788', '24a59d', '4bb5d0', '48fd62', '79011a', '7688c1']</p> <p>Train 15 minutes on TPU. Validation scores:<br> StartHesitation AP - 0.462 Turn AP - 0.896 Walking AP - 0.470 mAP - 0.609</p> <h3>Model 2 (tdcsfog model)</h3> <pre><code>CFG = {: , : , : //, : , : , : , : , : , : , : , : , } LEARNING_RATE = / STEPS_PER_EPOCH = WARMUP_STEPS = BATCH_SIZE = </code></pre> <p>Validation subjects <br> ['07285e', '220a17', '54ee6e', '312788', '24a59d', '4bb5d0', '48fd62', '79011a', '7688c1']</p> <p>Train 40 minutes on GPU. 
Validation scores:<br> StartHesitation AP - 0.481 Turn AP - 0.886 Walking AP - 0.437 mAP - 0.601</p> <h3>Model 3 (tdcsfog model)</h3> <pre><code>CFG = {: , : , : //, : , : , : , : , : , : , : , : , } LEARNING_RATE = / STEPS_PER_EPOCH = WARMUP_STEPS = BATCH_SIZE = </code></pre> <p>Validation subjects <br> ['e39bc5', '516a67', 'af82b2', '4dc2f8', '743f4e', 'fa8764', 'a03db7', '51574c', '2d57c2']</p> <p>Train 11 minutes on TPU. Validation scores:<br> StartHesitation AP - 0.601 Turn AP - 0.857 Walking AP - 0.289 mAP - 0.582</p> <h3>Model 4 (tdcsfog model)</h3> <pre><code>CFG = {: , : , : //, : , : , : , : , : , : , : , : , } LEARNING_RATE = / STEPS_PER_EPOCH = WARMUP_STEPS = BATCH_SIZE = </code></pre> <p>Validation subjects <br> ['5c0b8a', 'a03db7', '7fcee9', '2c98f7', '2a39f8', '4f13b4', 'af82b2', 'f686f0', '93f49f', '194d1d', '02bc69', '082f01']</p> <p>Train 13 minutes on TPU. Validation scores:<br> StartHesitation AP - 0.367 Turn AP - 0.879 Walking AP - 0.194 mAP - 0.480</p> <h3>Model 5 (defog model)</h3> <pre><code>CFG = {: , : , : //, : , : , : , : , : , : , : , : , } LEARNING_RATE = / STEPS_PER_EPOCH = WARMUP_STEPS = BATCH_SIZE = </code></pre> <p>Validation subjects <br> ['00f674', '8d43d9', '107712', '7b2e84', '575c60', '7f8949', '2874c5', '72e2c7']</p> <p>Train data: defog data, notype data<br> Validation data: defog data, notype data</p> <p>Train 45 minutes on TPU. Validation scores:<br> StartHesitation AP - [not used] Turn AP - 0.625 Walking AP - 0.238 mAP - 0.432<br> Event AP - 0.800</p> <h3>Model 6 (defog model)</h3> <pre><code>CFG = {: , : , : //, : , : , : , : , : , : , : , : , } </code></pre> <p>Train data: defog data (about 85%)<br> Validation data: defog data (about 15%), notype data (100%)</p> <h3>Model 7 (defog model)</h3> <pre><code>CFG = {: , : , : //, : , : , : , : , : , : , : , : , } LEARNING_RATE = / STEPS_PER_EPOCH = WARMUP_STEPS = BATCH_SIZE = </code></pre> <p>Train data: defog data (100%)<br> Validation data: notype data (100%)</p> <p>Train 18 minutes on TPU. Validation scores:<br> StartHesitation AP - [not used] Turn AP - [not used] Walking AP - [not used] mAP - [not used]<br> Event AP - 0.764</p> <h3>Model 8 (defog model)</h3> <pre><code>CFG = {: , : , : //, : , : , : , : , : , : , : , : , } LEARNING_RATE = / STEPS_PER_EPOCH = WARMUP_STEPS = BATCH_SIZE = </code></pre> <p>Validation subjects<br> ['12f8d1', '8c1f5e', '387ea0', 'c56629', '7da72f', '413532', 'd89567', 'ab3b2e', 'c83ff6', '056372']</p> <p>Train data: defog data, notype data<br> Validation data: defog data, notype data</p> <p>Train 28 minutes on TPU. Validation scores:<br> StartHesitation AP - [not used] Turn AP - 0.758 Walking AP - 0.221 mAP - 0.489<br> Event AP - 0.744</p> <h1>Final models</h1> <p>Tdcsfog: 0.25 * Model 1 + 0.25 * Model 2 + 0.25 * Model 3 + 0.25 * Model 4</p> <p>Defog: 0.25 * Model 5 + 0.25 * Model 6 + 0.25 * Model 7 + 0.25 * Model 8</p>
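<p>Because some literals in the snippets above were lost in formatting, here is a cleaned-up, single-block sketch of the patching and target-downsampling logic described at the start (block_size=15552 and patch_size=18 for tdcsfog; defog uses 12096 and 14). Real series are first zero-padded to a multiple of block_size; the function names are mine.</p>
<pre><code>import tensorflow as tf

BLOCK_SIZE, PATCH_SIZE = 15552, 18          # tdcsfog values from the write-up
SEQ_LEN = BLOCK_SIZE // PATCH_SIZE          # 864 patches per block


def make_patches(series):
    # series: (BLOCK_SIZE, 3) zero-padded AccV/AccML/AccAP block.
    x = tf.reshape(series, (SEQ_LEN, PATCH_SIZE, 3))
    return tf.reshape(x, (SEQ_LEN, PATCH_SIZE * 3))      # (864, 54) model input


def downsample_targets(targets):
    # targets: (BLOCK_SIZE, 3) StartHesitation/Turn/Walking labels.
    t = tf.reshape(targets, (SEQ_LEN, PATCH_SIZE, 3))
    return tf.reduce_max(t, axis=1)                      # (864, 3) model output


def upsample_predictions(preds):
    # preds: (864, 3) model output, tiled back to the original resolution.
    p = tf.repeat(preds[:, tf.newaxis, :], PATCH_SIZE, axis=1)
    return tf.reshape(p, (BLOCK_SIZE, 3))


dummy = tf.random.uniform((BLOCK_SIZE, 3))
print(make_patches(dummy).shape, upsample_predictions(tf.zeros((SEQ_LEN, 3))).shape)
</code></pre>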
AMP®-Parkinson's Disease Progression Prediction
1st Place Solution
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: AMP®-Parkinson's Disease Progression Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all we would like to thank Kaggle and AMP PD for hosting this competition and providing a great dataset to dig into and rich enough to get lost in it for several months. It is always an extra motivation to work on problems that can bring value to the medical field or contribute to scientific research. Additional words of gratitude of course go to <a href="https://www.kaggle.com/kyakovlev" target="_blank">@kyakovlev</a> for the amazing work he has done, I think we formed a great team for this competition and the results reflect that.</p> <p>We publish the full code of the winning notebook <a href="https://www.kaggle.com/dott1718/amp-model-stack3" target="_blank">here</a></p> <h2>Quick summary</h2> <p>Our final solution is a simple average of two models: LGB and NN. Both models were trained on the same features (+ scaling/binarization for NN):</p> <ul> <li>Visit month</li> <li>Forecast horizon</li> <li>Target prediction month</li> <li>Indicator whether blood was taken during the visit</li> <li>Supplementary dataset indicator</li> <li>Indicators whether a patient visit occurred on 6th, 18th and 48th month</li> <li>Count of number of previous “non-annual” visits (6th or 18th)</li> <li>Index of the target (we pivot the dataset to have a single target column)</li> </ul> <p>The winning solution fully ignores the results of the blood tests. We’ve tried hard to find any signal in this crucial piece of the data, but unfortunately we came to the conclusion that none of our approaches or models can benefit from blood test features significant enough to distinguish it from random variations. The final models were trained only on the union of clinical and supplementary datasets.</p> <h2>LGB</h2> <p>For the entire duration of the competition LGB was our model to beat and only a NN trained with the competition metric as the loss function was able to achieve competitive performance on CV. At first, we tried running a regression LGB model with different hyperparameters and custom objective functions, but nothing was better than l1 regression, which does not optimize the desired metric SMAPE+1. We also noticed that on CV the performance of every model is always better when the regression outputs are rounded to integers. Then we switched to an alternative approach.</p> <p>Our LGB model is a classification model with 87 target classes (0 to maximum target value) and logloss objective. To produce the forecast we applied the following post-processing: given the predicted distribution of target classes, pick a value that minimizes SMAPE+1. Taking into account the observation that the optimal predictions are always integers, the task boils down to a trivial search among 87 possible values. 
Such an approach would have worked well for the original SMAPE metric also, because the approach treats cases with multiple local minimums naturally.</p> <p>We ran an optimization routine to tune LGB hyperparameters to minimize SMAPE+1 on CV using the described post-processing.</p> <h2>NN</h2> <p>The neural network has a simple multi-layer feed-forward architecture with a regression target, using the competition metric SMAPE+1 as the loss function. We fixed the number of epochs and scheduler, and then tuned the learning rate and hidden layer size. The only trick there was to add a leaky relu activation as the last layer to prevent NN from getting stuck at negative predictions. Of course there are alternative ways to solve this issue.</p> <h2>Cross-Validation</h2> <p>We’ve tried multiple cross-validation schemes due to the small training sample size, all of them were stratified by patient id. Once a sufficient number of folds is used, they all are quite well correlated to each other. Better than to the public leaderboard :) The final scheme we relied on was leave-one-(patient)-out or, in other words, a group k-fold cross validation with a fold for each patient. We used it because it doesn’t depend on random numbers. The cross-validation correlated well enough with the private leaderboard, and the submit we chose turned out to be our best private LB submission.</p> <h2>What worked</h2> <p>The most impactful feature was the indication of whether a patient visit happened on the 6th month or not. It correlates strongly with the UPDRS targets (especially 2 and 3) and with frequency of medications being taken. As we can observe only the data correlation, it is impossible to judge what is the core reason for that. During the competition our hypothesis was that the patients that had more severe symptoms during the first examination (UPDRS scores parts 2 and 3) were more likely to get invited for a visit after 6 months and more likely to get medications prescribed. But for model training it was important that the patients that made a visit on the 6th month, have higher UPDRS scores on average. The same is true for an 18th month visit as well, but these 2 features are correlated. I personally wonder if presence / absence of these variables in the models are the reason for the private LB cliff effect around 20th place.</p> <p>Another curious effect related to it is observed for the forecasts made at visit_month = 0. If you look at the model forecasts for 0, 12 and 24 months ahead, they are consistently lower than the forecasts 6 months ahead. It is very logical from the math point of view - if a patient will show up on the 6th month, they will have higher UPDRS scores on average, and if not - the forecast will be ignored. But such model behaviour is unreasonable from a clinical point of view of course.</p> <p>It was also important to pay attention to the differences between training and test datasets as e.g. nicely summarized <a href="https://www.kaggle.com/competitions/amp-parkinsons-disease-progression-prediction/discussion/403045" target="_blank">here</a>. That, for instance, explains well why adding a feature indicating the visit on the 30th month could improve the CV, but ruin LB.</p> <h2>What didn’t work</h2> <p>Blood test data. We’ve tried many approaches to add proteins and/or peptides data to our models, but none of them improved CV. We narrowed it down to a bag of logistic regressions that forecasted the visit on the 6th month based on the blood test result on the 0th month. 
We applied soft up/down scaling of model-based predictions for patients that were more/less likely to visit on the 6th month based on the logistic regression probabilities. It worked on public LB after tuning a couple of “magic” coefficients directly on public LB itself. That gave us a boost all the way up to the second place on public LB, but it was clearly an overfit. We chose a “mild” version of that approach as our second final submission. It scored worse on private LB than the other submission, but, interestingly enough, not by as much as one could have expected (60.0 vs 60.3).</p> <p>Thanks to everyone who participated in the competition, those who kept many interesting discussions going on the forum and those who suggested improvements! And congrats to all the winners!</p>
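<p>For concreteness, the post-processing described in the LGB section amounts to a small search over the 87 integer candidates, choosing the one with the lowest expected SMAPE+1 under the predicted class distribution. The function below is an illustrative reconstruction, not the winners' exact code.</p>
<pre><code>import numpy as np


def best_integer_prediction(class_probs):
    # class_probs: predicted distribution over integer target values 0..86
    # from the 87-class LGB model. Pick the candidate c minimising the
    # expected SMAPE+1 term  sum_t p(t) * |c - t| / ((c + t + 2) / 2).
    values = np.arange(len(class_probs))
    cand = values[:, None].astype(float)
    loss = np.abs(cand - values) / ((cand + values + 2.0) / 2.0)
    expected = loss @ class_probs
    return int(values[np.argmin(expected)])


probs = np.zeros(87)
probs[[3, 10]] = [0.6, 0.4]          # toy bimodal distribution
print(best_integer_prediction(probs))
</code></pre>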
Parkinson's Freezing of Gait Prediction
52nd Place Solution + Code
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the hosts for providing such an interesting/worthwhile problem to work on, and to everyone who shared their code and insights throughout the competition! This is my first completed Kaggle competition; I feel like I spent as much time learning about the Kaggle ecosystem as I did about the FOG data 😂 but overall it was a fun experience.</p> <p>My solution was a fairly straightforward 1D CNN ensemble, but I still wanted to share it!</p> <p>My code: <a href="https://www.kaggle.com/code/abandura/fog-1d-cnn" target="_blank">https://www.kaggle.com/code/abandura/fog-1d-cnn</a></p> <p>I referenced <a href="https://www.kaggle.com/code/coderrkj/parkinson-fog-pred-conv1d-separate-tf-model" target="_blank">this</a> public notebook when I began the comp, and wanted to thank the author for sharing their work.</p> <h2>Solution Overview</h2> <h3>Preprocessing</h3> <p>TS window dimensions:</p> <pre><code>window_size = window_future = window_past = window_size - window_future </code></pre> <p>TS Features (standardized for each file):</p> <pre><code>Raw 3D accelerometer data Seperate 3D acc data into high freq low freq components MA Spectrogram of 3D accelerometer data (NFFT = ) Temporal location of sequence**: time_index/total_time </code></pre> <p>**Some people feel that using temporal features reduces model usability, but I disagree. These models could be useful for auto-labeling more data from the same FOG-inducing protocols, in which case temporal features would be consistent to the data we used in this competition. On the flip side, I doubt these models would generalize well to something like the daily living dataset regardless of whether temporal features were leveraged. Just my two cents.</p> <p>CV strategy: 6 fold on patients, separating patients so each fold has roughly the same amount of data</p> <h3>Model Architecture</h3> <ul> <li>The best models based on CV for each fold were ensembled using a weighted average.</li> <li>There are batch normalization layers between each convolutional layer even though they aren't included in the diagram.</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13474157%2F6746c1774c63e26d52986939d718f401%2FFOG%20CNN%20Architecture.jpeg?generation=1686624959127555&amp;alt=media"></p> <h3>Training Parameters</h3> <ul> <li>batch_size = 1024</li> <li>learning_rate = 1e-4</li> <li>num_epochs = 1</li> <li>max_batches_per_epoch = 12000</li> <li>max_val_batches = 500</li> <li>val_freq = 500</li> <li>loss function:<br> For tdcsfog I used cross entropy loss with inverse class frequency weights<br> For defog I used focal loss with gamma = 5</li> </ul> <p><strong>Public LB</strong>: 0.362<br> <strong>Private LB</strong>: 0.305</p> <h4>Other Ideas I tried that didn't improve accuracy:</h4> <ul> <li>Combined models: single CNN model for both tdcsfog and defog after unifying sampling rates and units. 
</li> <li>Multi-head CNN model: Add a second loss function for presence/absence of a FOG event so that the notype data could be leveraged.</li> <li>CNN features as inputs for a LightGBM model: Use the output of the convolutional blocks as input to a GBM model, along with other temporal and patient features.</li> </ul>
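<p>A rough sketch of the per-file feature preparation described above (per-file standardization, a moving-average split into low/high-frequency components, and the temporal-location feature). The window length is a placeholder and the spectrogram features are omitted for brevity.</p>
<pre><code>import numpy as np
import pandas as pd


def build_features(df, ma_window=50):
    # df: one defog/tdcsfog recording with AccV/AccML/AccAP columns.
    # ma_window is a hypothetical smoothing length; the write-up only says a
    # moving average separates the low- and high-frequency components.
    feats = {}
    for col in ["AccV", "AccML", "AccAP"]:
        x = (df[col] - df[col].mean()) / (df[col].std() + 1e-8)   # per-file standardization
        low = x.rolling(ma_window, center=True, min_periods=1).mean()
        feats[col] = x                    # raw (standardized) signal
        feats[col + "_low"] = low         # low-frequency component
        feats[col + "_high"] = x - low    # high-frequency residual
    feats["time_frac"] = np.arange(len(df)) / len(df)   # temporal location
    return pd.DataFrame(feats)


demo = pd.DataFrame(np.random.randn(1000, 3), columns=["AccV", "AccML", "AccAP"])
print(build_features(demo).shape)   # (1000, 10)
</code></pre>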
AMP®-Parkinson's Disease Progression Prediction
9th Place Solution
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: AMP®-Parkinson's Disease Progression Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to express my gratitude to everyone who organized this competition, thank you! Given the small amount of data, the competition was quite uncertain, but I'm glad that I was able to achieve good results.<br> Now, I will describe my solution below.</p> <h1>Solution Overview</h1> <p>The important points of my solution are as follows.</p> <h2>Rule-based Patient splitting (most important)</h2> <p>While looking at the train data, I noticed that healthy patients with very low UPDRS values are tested only every other year (months 0, 12, 24, 36, 48, 60, …) and that the semi-annual visits (months 6, 18, 30, 42, 54, …) are not present for them.<br> This was also the case in the test data. For patients without data at either the 6th or the 18th month, lowering the predicted UPDRS values gave a significant improvement on the LB.</p> <h2>Modeling using only visit_month</h2> <p>Modeling basically uses only the visit_month feature. However, the "healthy patients with very low UPDRS values" mentioned in the first point keep low UPDRS values as the months pass, so these patients are removed from the training data for modeling. The supplemental data is also used, but data for patients with month = 5 and patients with only month = 0 data are removed.</p> <p>The models used are listed below.</p> <ul> <li>simple linear regression (referring to the <a href="https://www.kaggle.com/code/ambrosm/pdpp-quadratic-trends" target="_blank">AmbrosM notebook</a>)</li> <li>CatBoost regression with Huber loss</li> <li>CatBoost regression with MAE loss<br> These three regressions were combined with a weighted average to obtain a higher CV.<br> For validation, group k-fold was performed, aligning the target distribution across folds as much as possible.</li> </ul> <h2>Submit Selection</h2> <p>For the final submission, the following two subs were selected.</p> <p>A. LB and CV best sub: applying both approaches above, LB and CV are high and aligned (CV: 54.41, LB: 54.4).<br> B. CV-only best sub: for patients for whom protein/peptide data is available in the test data, use the results of the model with protein/peptide features instead of the visit_month-only results. In this case, CV improves by about 0.4, but LB worsens by about the same amount (CV: 54.02, LB: 54.9).</p> <p>As a result, sub A scored better (Private: 60.5); sub B scored 61.2.</p> <p>In summary, it was a very uncertain competition, but by looking carefully at the data and adopting only approaches that improved both CV and LB, I was able to achieve good results.</p> <p>Basically I did all the work in Kaggle notebooks, so I am publishing the code. However, it has not been refactored, so readability is poor.<br> <a href="https://www.kaggle.com/code/mhyodo/amp-last-submit-v1" target="_blank">Code</a></p>
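<p>A minimal sketch of the weighted blend of the three visit_month-only regressors listed above; the blend weights, Huber delta and CatBoost settings are placeholders rather than the author's tuned values.</p>
<pre><code>import numpy as np
from catboost import CatBoostRegressor
from sklearn.linear_model import LinearRegression

# One blend per UPDRS target; X holds only visit_month-derived features.
MODELS = [
    (LinearRegression(), 0.4),
    (CatBoostRegressor(loss_function="Huber:delta=1.35", verbose=0), 0.3),
    (CatBoostRegressor(loss_function="MAE", verbose=0), 0.3),
]


def fit_predict(X_train, y_train, X_test):
    # Fit each regressor and return the weighted average of their predictions.
    blended = np.zeros(len(X_test))
    for model, weight in MODELS:
        model.fit(X_train, y_train)
        blended += weight * model.predict(X_test)
    return blended
</code></pre>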
Benetech - Making Graphs Accessible
2nd Place Solution [Updated with Code Link]
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank the Kaggle community for sharing great ideas and engaging discussions. Special shout outs to <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> and <a href="https://www.kaggle.com/brendanartley" target="_blank">@brendanartley</a>. Congratulations to the winning teams - looking forward to your write ups.</p> <p>Thank you Kaggle and hosts for organizing this interesting competition.</p> <h2>Links</h2> <ul> <li>Inference notebook: <a href="https://www.kaggle.com/code/conjuring92/a05-mga-split-pipe" target="_blank">https://www.kaggle.com/code/conjuring92/a05-mga-split-pipe</a></li> <li>Github Repo: <a href="https://github.com/rbiswasfc/benetech-mga" target="_blank">https://github.com/rbiswasfc/benetech-mga</a></li> </ul> <p>The following is a detailed summary of my solution:</p> <h2>1 Overview</h2> <p>My solution is entirely based on image-to-text models finetuned from the <code>google/matcha-base</code> backbone. The training pipeline, as depicted below, is a sequence of two phases. In the first phase, I leveraged a large number of synthetic plots to adapt the backbone for the current task. In the second phase, I used oversampled extracted / non-generated plots to specialize the pipeline for real world graphs. In this phase, I created separate models for scatter and non-scatter plots, primarily to mitigate the difficulty in predicting scatter data points.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2125251%2F232cced3276ad6561fe6a56d2e7509ad%2Fmga_flow.png?generation=1687269107458839&amp;alt=media" alt=""></p> <h2>2 Model</h2> <p>All models share the same architecture (image-to-text transformer) and input-output pattern. Model input is simply the plot image itself without any prompts. The output text has the following template:</p> <p><code>&lt;|bos|&gt; &lt;|chart_type_start|&gt; {chart_type} &lt;|chart_type_end|&gt; &lt;|num_point_start|&gt; {n_x} | {n_y} &lt;|num_point_end|&gt; &lt;|x_span_start|&gt; {x0} | {x1} | {x2} | … | {xn} &lt;|x_span_end|&gt; &lt;|y_span_start|&gt; {y0} | {y1} | {y2} | … | {ym} &lt;|y_span_end|&gt; &lt;|eos|&gt;</code></p> <p>Some minor details:</p> <ul> <li>Numeric values are cast into scientific notation using <code>val = "{:.2e}".format(float(val))</code>.</li> <li>Added histogram as additional chart type, which later converted to vertical_bar during post processing</li> </ul> <h2>3 Data</h2> <p>Apart from the competition data, I used the following sources.</p> <h3>Synthetic Dataset</h3> <p>I spent majority of my competition time creating the synthetic dataset. For the underlying data in synthetic plots, I used</p> <ul> <li>Wikitables data i.e. 
tables from wikipedia (25%)<ul> <li><a href="http://websail-fe.cs.northwestern.edu/TabEL/" target="_blank">http://websail-fe.cs.northwestern.edu/TabEL/</a></li> <li><a href="https://github.com/sunlab-osu/TURL" target="_blank">https://github.com/sunlab-osu/TURL</a></li></ul></li> <li>Synthetic XY data (75%)<ul> <li>Categorical series: I created a list of categories using wikipedia glossary pages in STEM domain (<a href="https://www.kaggle.com/code/conjuring92/w03-stem-glossary/notebook" target="_blank">https://www.kaggle.com/code/conjuring92/w03-stem-glossary/notebook</a>)</li> <li>Numerical series: random function generators ensuring all combinations of narrow to wide min-max range, small (1e-6) - large values (1e6), inclusion of outliers etc</li></ul></li> </ul> <p>I generated the plots using matplotlib ensuring they capture all aspects of the graph conventions. For example, in the case of line plots, the generated graphs included shared origin plots, having tick labels that are not included in the data-series, additional point markers in between two tick labels, unequal spacing between x tick labels etc. I tried to maximize the diversity in the graphs by -</p> <ul> <li>customizing tick marks, tick labels, tick direction, formatting of numerical tick labels (e.g. scientific notation, different rounding, European style formatting, adding % as suffix, currency symbol as prefix etc), major / minor grids, titles, axis limits, spines (e.g. setting bottom spine at y=0), legends, markers etc</li> <li>grayscale, background textures, aspect ratio (very wide to very narrow)</li> <li>number of data points (4-24 points for non-scatter, 4-64 for scatter)</li> <li>random special effects: text boxes with stats on y values, horizontal / vertical bands, insets, random texts, random equations, annotating min/max points, error bands, adding random shapes, varying line width, color, data point marker size, hatches, error bars, slopes etc</li> <li>log-log, semi-log plots, reverse y axis (although these weren’t part of test set, my hypothesis (?) was they would help in model generalization)</li> </ul> <p>The synthetic dataset consisted of</p> <ul> <li>100k horizontal bars</li> <li>100k vertical bars + histograms</li> <li>100k dot plots</li> <li>200k line plots</li> <li>200k scatter plots</li> </ul> <h3>Synthetic Dataset - Bartley</h3> <ul> <li>Random selection of 25k data points from the synthetic dataset shared by <a href="https://www.kaggle.com/brendanartley" target="_blank">@brendanartley</a>: <a href="https://www.kaggle.com/datasets/brendanartley/benetech-extra-generated-data" target="_blank">https://www.kaggle.com/datasets/brendanartley/benetech-extra-generated-data</a></li> </ul> <h3>Pseudo Labelling</h3> <p>I took screenshot of around 700 images from wikimedia commons (e.g. <a href="https://commons.wikimedia.org/w/index.php?search=line+plots&amp;title=Special:MediaSearch&amp;go=Go&amp;type=image)" target="_blank">https://commons.wikimedia.org/w/index.php?search=line+plots&amp;title=Special:MediaSearch&amp;go=Go&amp;type=image)</a>. I used pseudo labelling, followed by manual correction, to generate the annotations.</p> <h3>ICDAR dataset</h3> <p>I used around 1100 images from ICDAR, only those having 1 XY series (250 horizontal bar + 450 vertical bar + 250 lines + 150 scatter). I also did post-processing to ensure annotations match competition graph conventions (e.g. 
handling percentages, interpolation of line plot data to match tick labels etc).</p> <h2>4 Datamix</h2> <ul> <li>Datamix 1: used for domain adaptation</li> </ul> <table> <thead> <tr> <th>Dataset</th> <th>Size</th> <th>Multiplier</th> <th>Effective Size</th> </tr> </thead> <tbody> <tr> <td>Competition Dataset - Synthetic</td> <td>60k</td> <td>3</td> <td>180k</td> </tr> <tr> <td>Competition Dataset - Extracted</td> <td>1.1k</td> <td>16</td> <td>17k</td> </tr> <tr> <td>Synthetic Dataset - Self</td> <td>700k</td> <td>1</td> <td>700k</td> </tr> <tr> <td>Synthetic Dataset - Bartley</td> <td>25k</td> <td>1</td> <td>25k</td> </tr> </tbody> </table> <ul> <li>Datamix 2: scatter specialization</li> </ul> <table> <thead> <tr> <th>Dataset</th> <th>Size</th> <th>Multiplier</th> <th>Effective Size</th> </tr> </thead> <tbody> <tr> <td>Competition Dataset - Synthetic</td> <td>11k</td> <td>1</td> <td>11k</td> </tr> <tr> <td>Competition Dataset - Extracted</td> <td>0.2k</td> <td>16</td> <td>3.2k</td> </tr> <tr> <td>Synthetic Dataset - Self</td> <td>30k</td> <td>1</td> <td>30k</td> </tr> <tr> <td>Pseudo Labelled Dataset</td> <td>0.1k</td> <td>16</td> <td>1.6k</td> </tr> <tr> <td>ICDAR Dataset</td> <td>0.2k</td> <td>16</td> <td>3.2k</td> </tr> </tbody> </table> <ul> <li>Datamix 3: non-scatter specialization</li> </ul> <table> <thead> <tr> <th>Dataset</th> <th>Size</th> <th>Multiplier</th> <th>Effective Size</th> </tr> </thead> <tbody> <tr> <td>Competition Dataset - Synthetic</td> <td>48k</td> <td>1</td> <td>48k</td> </tr> <tr> <td>Competition Dataset - Extracted</td> <td>0.9k</td> <td>8</td> <td>7.2k</td> </tr> <tr> <td>Synthetic Dataset - Self</td> <td>20k</td> <td>1</td> <td>20k</td> </tr> <tr> <td>Pseudo Labelled Dataset</td> <td>0.5k</td> <td>8</td> <td>4k</td> </tr> <tr> <td>ICDAR Dataset</td> <td>1k</td> <td>8</td> <td>8k</td> </tr> </tbody> </table> <h2>5 Training</h2> <p>The main hyper-parameters for training were <code>max_patches</code> and <code>max_length</code>. 
I used the following settings at various phases of training:</p> <h5>Phase 1 Training</h5> <ul> <li>max_patches: 2048</li> <li>max length: 1024</li> <li>lr: 5e-5</li> <li>batch size: 2</li> <li>gradient accumulation: 16</li> </ul> <h5>Phase 2 training - non-scatter</h5> <ul> <li>max_patches: 4096</li> <li>max length: 512</li> <li>lr: 2e-5</li> <li>batch size: 4</li> <li>gradient accumulation: 2</li> </ul> <h5>Phase 2 training - scatter</h5> <ul> <li>max_patches: 3072</li> <li>max length: 1024</li> <li>lr: 2e-5</li> <li>batch size: 8</li> <li>gradient accumulation: 1</li> <li>AWP</li> </ul> <p>As minor detail, I used Exponential Moving Average (EMA) of model weights, gradient clipping, cosine scheduler with liner warmup during training.</p> <h2>6 Augmentation</h2> <p>Since I was repeating the extracted images many times, I decided to include the following augmentation</p> <pre><code>transforms = A.Compose( [ A.OneOf( [ A.RandomToneCurve(=0.3), A.RandomBrightnessContrast( brightness_limit=(-0.1, 0.2), contrast_limit=(-0.4, 0.5), =, ), A.HueSaturationValue( hue_shift_limit=(-20, 20), sat_shift_limit=(-30, 30), val_shift_limit=(-20, 20) ) ], =0.5, ), A.OneOf( [ A.MotionBlur(=3), A.MedianBlur(=3), A.GaussianBlur(=3), A.GaussNoise(var_limit=(3.0, 9.0)), ], =0.5, ), A.Downscale(=, =0.1, =0.90, =0.99), ], =0.5, ) </code></pre> <h2>7 References</h2> <ul> <li>Matcha Paper: <a href="https://arxiv.org/pdf/2212.09662v2.pdf" target="_blank">https://arxiv.org/pdf/2212.09662v2.pdf</a></li> <li><a href="https://www.kaggle.com/code/nbroad/donut-train-benetech" target="_blank">https://www.kaggle.com/code/nbroad/donut-train-benetech</a></li> <li><a href="https://www.kaggle.com/code/nbroad/donut-infer-lb-0-44-benetech" target="_blank">https://www.kaggle.com/code/nbroad/donut-infer-lb-0-44-benetech</a></li> <li><a href="https://www.kaggle.com/datasets/brendanartley/benetech-extra-generated-data" target="_blank">https://www.kaggle.com/datasets/brendanartley/benetech-extra-generated-data</a></li> <li>AWP: <a href="https://www.kaggle.com/code/wht1996/feedback-nn-train/notebook" target="_blank">https://www.kaggle.com/code/wht1996/feedback-nn-train/notebook</a></li> </ul> <p>Sorry for the long write up. Please let me know if you have any queries / suggestions. I plan to release all scripts, configs and datasets by next week. [Edit: code released at <a href="https://github.com/rbiswasfc/benetech-mga" target="_blank">https://github.com/rbiswasfc/benetech-mga</a>]</p>
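<p>For reference, the output template and the scientific-notation casting from section 2 can be produced with a small helper like the one below; the token spacing is illustrative and may differ slightly from the exact training format.</p>
<pre><code>def format_value(val):
    # Numeric values are cast to scientific notation, as described above.
    try:
        return "{:.2e}".format(float(val))
    except ValueError:
        return str(val)


def build_target(chart_type, xs, ys):
    # Serialise one chart into the decoder target string.
    x_txt = " | ".join(format_value(x) for x in xs)
    y_txt = " | ".join(format_value(y) for y in ys)
    return ("&lt;|bos|&gt; &lt;|chart_type_start|&gt; " + chart_type
            + " &lt;|chart_type_end|&gt; &lt;|num_point_start|&gt; "
            + str(len(xs)) + " | " + str(len(ys))
            + " &lt;|num_point_end|&gt; &lt;|x_span_start|&gt; " + x_txt
            + " &lt;|x_span_end|&gt; &lt;|y_span_start|&gt; " + y_txt
            + " &lt;|y_span_end|&gt; &lt;|eos|&gt;")


print(build_target("line", ["Jan", "Feb", "Mar"], [3.5, 4.0, 4.25]))
</code></pre>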
Benetech - Making Graphs Accessible
7th place solution - no external data
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>For this competition, I chose to use a pipeline of multiple models instead of an end-to-end model (such as Donut, Deplot, or Matcha). I made this choice because the baseline end-to-end model model I tested with Kaggle data only did not perform well, and I thought it may require a powerful machine to train and cost time to generate data.</p> <h1>Models</h1> <ul> <li>Image classification: used to classify chart types.</li> <li>Text detection: used to detect x/y labels.</li> <li>Text recognition: used to read the crops (pretrained model only, no fine-tuning).</li> <li>Object Detection: used to detect x/y ticks, scatter points, and vertical and horizontal bars.</li> <li>Object Segmentation: used to segment lines on line charts and segment vertical bars (to classify if the vertical bar is a histogram or not).</li> <li>Pretrained Deplot Model: used for cases where the pipeline cannot predict anything (applied only in one case out of 559 CV files).</li> </ul> <h1>Data</h1> <ul> <li>Use only Kaggle data.</li> <li>Split the <code>extracted</code> data into a 50% training set and a 50% validation set.</li> <li>Use all <code>generated</code> data for training set</li> <li>Finally, use all the data to train the final weights for each type of model.</li> </ul> <h1>Result:</h1> <p>Best CV - LB (with train data only): 0.871 - 0.86</p> <p>Private LB: 0.67</p> <p>Best public - private LB: 0.86 - 0.69</p> <h1>Pipeline Details</h1> <p><strong>Step 0: Chart type classification</strong></p> <p>I used a v2s backbone model to classify the charts, with a CV accuracy of 99.2%.</p> <h2>General components/steps applied for all chart types:</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1713154%2Fdc243c99a1031a1d8e4ddc2f2dbc0250%2F000b92c3b098.jpg?generation=1687320266860703&amp;alt=media" alt=""></p> <p><strong>Step 1</strong>: Detect and read x/y labels:</p> <ul> <li>First, use a text detection model to detect the polygons of x/y labels.</li> <li>Then, use a text recognition model to read the text inside the polygons.</li> <li>Post-processing:<ul> <li>For x labels, draw a horizontal line from top to bottom of the image and select the line that intersects the largest number of x labels polygons. 
This is because the text detection model is not always accurate.</li> <li>Repeat the process for y labels.</li></ul></li> </ul> <p><strong>Step 2</strong>: Use an object detection model to detect x/y ticks.</p> <ul> <li>Post-processing: Similar to x/y labels, I draw a horizontal line to filter x ticks and a vertical line to filter y ticks.</li> </ul> <p><strong>Step 3</strong>: Map x/y labels and x/y ticks.</p> <ul> <li>Based on the relative position between labels and ticks, I create a 1-1 mapping for these label-tick pairs based on IOU in Ox direction and ignore all other labels/ticks that can't be paired.</li> <li>Because there are cases where the x labels are not straight, to make the mapping more accurate:<ul> <li>Get the rectangles of x_label from polygons.</li> <li>Draw a rhombus with vertices at the center of the rectangle edges.</li> <li>Draw a new rectangle with the center at the highest vertex of the rhombus.</li> <li>Then create a 1-1 mapping with x_boxes similar to mapping x labels and x ticks.</li></ul></li> </ul> <h2>For a vertical bar chart</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1713154%2F729ce9a29230faa6093f8d795020f25a%2FScreenshot_2023-06-20_at_11.35.00.png?generation=1687320318358985&amp;alt=media" alt=""></p> <p><strong>Step 4:</strong> Detect the point on top of each bar using an object detection model. I will call these points "value boxes" because I predicted boxes instead of points.</p> <p><strong>Step 5:</strong> Map x ticks and value boxes.</p> <ul> <li>This step is similar to mapping x labels and x ticks.</li> </ul> <p><strong>Step 6:</strong> Get the final value of each bar.</p> <ul> <li>Project the center of each value box to Oy, then get the final value by comparing it to the two nearest y ticks' values.<ul> <li><code>value_box_value = y1_value - abs((y2_value - y1_value) / (y2_pixel - y1_pixel) * (value_y_pixel - y1_pixel))</code></li></ul></li> </ul> <p><strong>How to check if a vertical bar is a histogram:</strong></p> <ul> <li>Because all columns of a histogram always sit next to each other without space, I use a bar segmentation model to detect the bar region.</li> <li>Calculate the percentage in the Ox direction. If the segmented region (in the Ox direction) is greater than 95% of the distance between the last bar and the first bar, then it is a histogram.</li> </ul> <h2>Horizontal Bar Chart</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1713154%2F6ade62f528b74599879f6833bc737a23%2FScreenshot_2023-06-20_at_11.32.59.png?generation=1687320337730044&amp;alt=media" alt=""></p> <p>I process a horizontal bar chart follow these steps:</p> <ul> <li>Rotate the chart 90 degrees and then flip it horizontally to make it similar to a vertical bar chart.</li> <li>Use the same steps as for a vertical bar chart to process the horizontal bar chart.</li> </ul> <h2>Line Chart</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1713154%2Fe6af38a3f7ec91c67630248db8800533%2FScreenshot_2023-06-20_at_11.18.59.png?generation=1687320402512900&amp;alt=media" alt=""><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1713154%2F60bcf18082ab15baf5a1879668494d18%2FScreenshot_2023-06-20_at_11.19.19.png?generation=1687320415337921&amp;alt=media" alt=""><br> <strong>Step 4</strong>: Use a segmentation model to predict the lines in the chart. 
For dashed lines, connect them to make them continuous.</p> <p><strong>Step 5</strong>: Project the line onto the Ox axis and keep only the x labels and ticks that can be projected onto the line in the Ox dimension. Then, get the corresponding value boxes for the remaining x labels.</p> <p><strong>Step 6</strong>: Obtain the final value for each value box:</p> <ul> <li>Project the center of each value box onto the Oy axis.</li> <li>Compare the projected value with the values of the two nearest y ticks to get the final value.</li> </ul> <h2>For scatter chart</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1713154%2Ffd84003f92f9f973e8019ba2564e1315%2FScreenshot_2023-06-20_at_11.16.21.png?generation=1687320430238074&amp;alt=media" alt=""></p> <p><strong>Step 4</strong>: Detect scatter points using object detection models. I find the box size of scatter points by getting the largest rectangular connected component that contains the scatter point, then average all the box sizes in the chart to get the final box size for all scatter points. These predicted points will be referred to as "value boxes" in the following steps, in order to maintain consistency with other chart types.</p> <p><strong>Step 5</strong>: Map the x/y ticks and value boxes.</p> <ul> <li>Similar to mapping the x ticks and value boxes in a vertical bar chart, we do the same for the y ticks in the Oy dimension.</li> </ul> <p><strong>Step 6</strong>: Obtain the final value of each value box.</p> <ul> <li>Project the center of the value boxes to Ox/Oy, then obtain the final value by comparing them to the values of the two closest x/y ticks.</li> </ul> <h2>For dot chart</h2> <p>Similar to a vertical bar chart, except that if the x labels are numerical, we use the scatter post-processing methods instead.</p> <h3>Other post-processing functions based on computer vision algorithms, some highlights:</h3> <ul> <li>To avoid errors in text recognition and detection, keep the y-label values as the longest increasing sequence.</li> <li>Splitting words inside polygons: since the pretrained text recognition model doesn't include spaces, I use another word detection model. Sorting the words based on their geometric location then helps with accuracy.</li> </ul> <p>More details are in the notebook. The inference notebook is public here: <a href="https://www.kaggle.com/code/thanhhau097a/bmga-submission-0-86" target="_blank">https://www.kaggle.com/code/thanhhau097a/bmga-submission-0-86</a></p> <p>Cheers</p>
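<p>The "compare to the two nearest ticks" step used throughout this pipeline is a linear interpolation between tick positions. A small self-contained sketch of that mapping, assuming you already have the pixel position and numeric value of the two neighbouring ticks (function and variable names are mine, not from the author's notebook):</p> <pre><code>def pixel_to_value(value_y_pixel, y1_pixel, y1_value, y2_pixel, y2_value):
    """Map a pixel coordinate to a data value using two neighbouring ticks."""
    pixels_per_unit = (y2_pixel - y1_pixel) / (y2_value - y1_value)
    return y1_value + (value_y_pixel - y1_pixel) / pixels_per_unit

# example: y tick "10" at row 400, y tick "20" at row 300 (pixel rows grow downwards)
print(pixel_to_value(350, y1_pixel=400, y1_value=10, y2_pixel=300, y2_value=20))  # prints 15.0
</code></pre> <p>This signed form avoids having to track which of the two ticks sits above the point, and it extrapolates cleanly when the point falls outside the two ticks.</p>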
Benetech - Making Graphs Accessible
40th place solution + code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Code</h1> <p>I hope you find the training and inference codes neat and tidy, I did my best effort to not make a mess!</p> <ul> <li><a href="https://www.kaggle.com/code/alejopaullier/benetech-matcha-train-0-74" target="_blank">Matcha Training Code</a></li> <li><a href="https://www.kaggle.com/code/alejopaullier/benetech-matcha-inference-0-74" target="_blank">Matcha Inference Code</a></li> </ul> <h1>TL;DR</h1> <p>Here is a brief explanation of our team solution with <a href="https://www.kaggle.com/cody11null" target="_blank">@cody11null</a>.</p> <p>Our solution consisted of a pipeline of object detectors (EfficientDet) and image-encoders/text-decoders (Matcha and Donut):</p> <ol> <li>A Donut model which acts like an OCR (Object Character Recognition) model is used to extract the axis labels of the chart (red)</li> <li>An object detector (EfficientDet) detects the data series (green). In the end, we only used this for scatter plots.</li> <li>A second object detector (EfficientDet) detects the plot's chart area (area where the data points are)(purple).</li> <li>Knowing the chart's limits from (3), the bounding boxes of the data series from (2) and the axis numerical values from (1) you can with a simple cross multiplication get the data series values. This approach is essentially as described in my <a href="https://www.kaggle.com/competitions/benetech-making-graphs-accessible/discussion/396773" target="_blank">discussion</a>.</li> <li>For the other chart types (<code>vertical_bar</code>, <code>horizontal_bar</code>, <code>line</code> and <code>dot</code>) we used a Matcha model.</li> </ol> <p><img src="https://i.imgur.com/SpheoRP.png"></p> <h1>Solution in detail</h1> <h3>1. Donut: Axis labels detection</h3> <p>We train different image-encoder/text-decoder architectures to detect the axis labels. I trained a Donut model (which surprisingly got the best results) and two other Matcha models which got worse performance. Despite training with different <code>max_patches</code> results were rather similar for all. Most complications were associated to shared origins in the axis labels, long sequences, floats with many decimal places, etc. In this step, the most important is to get the minimum and the maximum values right for each axis so the cross multiplication is done right.</p> <h3>2. EfficientDet: Object Detector for Data Series</h3> <p>This was one of the most time consuming parts for me. At the beggining of the competition until Nicholas shared his Donut model I didn't know that image-encoder/text-decoder models could perform that well in this competition so I focused on an object detector model which could detect well data series. To do so I lack bounding boxes training data, so I started manually labelling thousands of images. These took me a lot of days. My EfficientDet object detector did extremely well on generated images but not as well on extracted. I labeled as many images as I could. My OD has ~50% exact matches accuracy for scatter plots. 
It performs much better for other chart types, but since scatter plots can have <em>a lot</em> of points it's reasonable to be a tougher task.</p> <p>Here you can see some of my ODs predictions:</p> <p><img src="https://i.imgur.com/PQ1L3az.png"></p> <h3>3. EfficientDet: Object Detector for Chart bounding box</h3> <p>As part of the pipeline I had to detect the chart's bounding box so I could map pixel coordinates to real numerical values from the axis labels. This step was rather easy and its basically the same code from part (2) applied on the bounding boxes of <code>plot-bb</code> from the annotations. We had this training data available inside the JSONs files and it's a simple task so the OD achieves high accuracy.</p> <h3>4. Matcha and Donut models</h3> <p>At the beggining of the competition I started making several changes to Nicholas' Donut model and soon discovered that you could achieve higher performance by perfoming some basic postprocessing. However, even after training more epochs, perfoming data augmentation and other tricks I realised I couldn't achieve much higher performances. I then tried implementing a Matcha model and with a lot of effort I made it work thanks (again) to Nicholas which raised the issue of the GitHub discussion. Once I got the Matcha model working I tried a ton of different stuff until I could squeeze as much performance as I could. So here is a little roadmap to achieveing 0.74 in the public LB:</p> <ul> <li><strong>0.20:</strong> a model with only Object Detection and Donut for axis labels.</li> <li><strong>0.47:</strong> Nicholas model but just modifying post processing, like using max length of xy series instead of the min.</li> <li><strong>0.48:</strong> doing additional post processing like filling values with average of the data series mean values.</li> <li><strong>0.49:</strong> using a mixture of Donut and the Object Detector.</li> <li><strong>0.50</strong>: training Donut for 10 epochs instead of 5.</li> <li><strong>0.56:</strong> BOOM! big increase by making the Matcha model work. Super vanilla, no fancy stuff.</li> <li><strong>0.61:</strong> combining vanilla Matcha with my Object Detector for scatter plots.</li> <li><strong>0.64:</strong> training Matcha for more epochs (10 epochs).</li> <li><strong>0.69:</strong> training Matcha with Bartley's generated images from code. These improvement increase the performance a lot of all chart types except scatter with a relatively low number of additional images per chart type (+5k per image).</li> <li><strong>0.71:</strong> increase <code>max_patches</code> of Matcha from 512 to 1024.</li> <li><strong>0.74:</strong> trained Matcha on 100% of the extracted images instead of the 75% I usually used so I could get a 25% holdout set for validation.</li> </ul> <h1>Things that didn't work</h1> <p>So many things that didn't work! I will write the ones I can remember:</p> <ol> <li>I tried training at an early stage <strong>one different image-encoder/text-decoder model per each chart type</strong> instead of training a general model. I believe that this didnt work for two reasons: 1. I think I trained a donut model for this 2. I didnt have enough images (at that point I didnt use Bartley generated images).</li> <li>I tried <strong>different pre-trained weights</strong> for the Matcha model and ALL got worse results. These were: <code>statista</code>, <code>chartqa</code>, <code>plotqa</code>. 
Best performance came from <code>matcha-base</code>.</li> <li>Tried training <strong>two Matcha models for each axis</strong>. I really thought this idea could work and until today I don't know why it didnt! My reasoning is the following: it's harder to predict a longer sequence, so why don't I train a model to predict x-axis series and another for y-axis? In the end it gave worse results than a combined model.</li> <li>Tried <strong>training with A LOT of generated images</strong>. I tried training with additional 5k, 10k and 25k more images per chart type. So essentially I trained the model with the competitions 60k plus the additional generated images. So I trained some models with a total number of 180k images! What I soon realised was that if there is not that much variance in additional generated images no matter how many images you add the model will have reached its capacity. I discussed this <a href="https://www.kaggle.com/competitions/benetech-making-graphs-accessible/discussion/415470#2294381" target="_blank">here</a>, which relates to the Law of Diminishing returns.</li> <li>Trained with <strong>Balance Sampler/Oversampling</strong>. I trained with a Balance Sampler, which guarantees that on every batch, at least one minority class image will be present. My minority class images are the extracted images. This has two effects: it oversamples the minority class and (in theory) makes the model converge faster. I didn't see any benefit from oversampling (no surprise here).</li> <li><strong>Data augmentation</strong>. I tried augmenting data mostly by applying color-related augmentations like <code>RGBShift</code>, <code>RandomBrightness</code>, <code>ColorJitter</code>, etc. </li> <li><strong>Increased the <code>max_patches</code> parameter</strong> to 1536 (halfway between 1024 and 2048). So, as I saw an improvement by increasing <code>max_patches</code> from 512 to 1024 I thought, why don't I increase it a bit more and see what happens? Well, it didn't seem to improve the score and of course consumed more VRAM and computation time. Increasing <code>max_patches</code> increases a lot the VRAM consumption so the batch size had to be lowered and training took longer.</li> <li>Trained with <strong>different schedulers</strong>. I ended up training with <code>OneCycleLR</code> but before I tried some other, even constant LR, and never saw an increase in performance by modifying the LR scheduler.</li> <li>For the Object Detector I tried using <strong>Weighted Boxes Fusion</strong> to increase the exact matches of the data series. One of the greatest challenges of this competition is to get the exact number of data series points right. My OD sometimes produced an excess of low confident bounding boxes. I thought that maybe if I could fuse them with higher confidence bounding boxes I could get higher accuracies but couldn't quite make it. I ended up tunning the probability threshold for keeping/discarding the bounding boxes. In the end, bounding boxes with p&gt;0.22 where kept. Would be glad to know how anyone did this!</li> <li>Different Object Detector backbones. I used <code>tf_efficientnetv2_s</code> as my OD's backbone, but tried larger models without match success like <code>tf_efficientnetv2_l</code>.</li> <li><strong>Automatic Mix Precision</strong>. Not that it didn't work but didn't provide any performance by training with mixed precision using brain floating point tensors (<code>bfloat16</code>). 
However, since you use less VRAM compared to <code>float32</code>, you can train with larger batch sizes, which is a recommended practice (I heard that Karpathy said it's always best to train with batches as large as you can fit, but I may be wrong here).</li> <li>Many other things that I don't remember right now.</li> </ol> <h1>Hardware</h1> <p>It's nearly impossible to train models like Matcha and Donut with the P100s provided by Kaggle. We ended up paying for Google Colab Pro+, which provides A100 GPUs with 40 GB of VRAM, since neither of us had DL GPUs like an RTX 3090/4090. With the A100 we were able to train large models and run many experiments faster. I trained more than 30 different models!</p> <h1>Conclusions</h1> <p>Even though I would have liked to get inside the gold zone (always hoping for the best) I am grateful for the result we obtained and mainly for all the lessons learned throughout the competition. I firmly believe that if you want to learn something, try changing it! Code something from scratch! Best way to learn by far. I am really looking forward to reading other teams' solutions. I would also like to know if anyone tried changing the Vision Model and the Text Model from Pix2Struct. I don't know if it's even possible or how it's done, so if you know please leave a comment.</p> <p>Thanks all!</p>
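<p>Since raising <code>max_patches</code> from 512 to 1024 was one of the larger single gains in the roadmap above, here is a rough sketch of where that parameter enters a Matcha (Pix2Struct) inference call with Hugging Face <code>transformers</code>. The checkpoint name, image path and generation settings are illustrative assumptions, not the team's exact configuration.</p> <pre><code>import torch
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

processor = Pix2StructProcessor.from_pretrained("google/matcha-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/matcha-base").eval()

image = Image.open("chart.png").convert("RGB")  # placeholder path

# max_patches controls how many image patches the encoder receives:
# more patches capture finer detail (dense charts) at the cost of VRAM and speed.
inputs = processor(images=image, return_tensors="pt", max_patches=1024)

with torch.no_grad():
    generated = model.generate(
        flattened_patches=inputs["flattened_patches"],
        attention_mask=inputs["attention_mask"],
        max_new_tokens=512,
    )
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
</code></pre>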
Benetech - Making Graphs Accessible
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks to Kaggle and Benetech for this interesting competition where many different approaches are possible and ideas are endless. Given the strong distribution shift in the private test set, we are quite happy with our finish.</p> <p><strong>Summary</strong></p> <p>Our solution consists of 3 main components:</p> <ul> <li>Synthetic data generation.</li> <li>3-stage training of matcha models.</li> <li>Separate process to handle scatter plot.</li> </ul> <p><strong>1. Synthetic data generation</strong></p> <p>Early on we realized this competition is unique in the sense that we are not limited to the available dataset. If the model struggles with certain plot patterns, we can generate a large number of plots with these patterns and as the model (matcha) is very strong, it will learn.</p> <p>We started by building upon this amazing repo: <a href="https://github.com/rakutentech/chart-synthesizer" target="_blank">https://github.com/rakutentech/chart-synthesizer</a>. We used both competition’s extracted data and ICDAR data as validation and try to emulate as many patterns from them as possible. For example:</p> <ul> <li>Different fonts, tick orientations, tick styles, background colors, grid styles etc.</li> <li>Add error bars to bar and line plots.</li> <li>Generate histogram plots as a separate chart type.</li> <li>Difficult line plot patterns, such as when the line starts very close to a tick mark but doesn’t touch it, the model without additional training data will very likely include this tick value in the prediction. </li> <li>Add blur and noise effects to reproduce the look of extracted plots.</li> </ul> <p>Some examples of our generated data:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9250575%2F93bb591a9717e095b9f422c224b31c02%2Fline61.jpg?generation=1687298963057596&amp;alt=media" alt="“”"><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9250575%2F6f24a6ba2660e0440b150164d6390ab6%2Fvbar3.jpg?generation=1687298981955314&amp;alt=media" alt="“”"></p> <p>With each round of training, we would analyze the validation set, locate the patterns that give low score, and add these patterns to our chart generation code. We made the rise from 0.74 to 0.82 public LB simply by repeating this process multiple times.</p> <p>With our generation code we can create as many bar, line and scatter plots as necessary for each training stage. For dot plot we sampled 10k images from the great dataset provided by <a href="https://www.kaggle.com/brendanartley" target="_blank">@brendanartley</a> </p> <p><strong>2. Three-stage training of matcha models</strong></p> <p>Thanks to the kind sharing by <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> , we started the competition training <code>matcha-base</code> models and quickly reached a good score.</p> <p>The training is done in 3 stages:</p> <ul> <li>Train from <code>matcha-base</code> checkpoint as a chart classifier. 
The model is trained for 5 epochs, using little synthetic data and oversampling extracted data.</li> <li>Use the weight from stage 1, continue training for 10 epochs to extract data series from all chart types. In this step we add a lot of our own synthetic data (~150k images) to combine with the original dataset.</li> <li>Use the weight from stage 2, finetune separate models for 5 epochs with each chart type group:<ul> <li>vertical bar/dot (add 50k synthetic vertical bar plots, 5k histograms, 5k dot plots)</li> <li>horizontal bar (add 50k synthetic plots)</li> <li>line (add 200k synthetic plots)</li> <li>scatter (add 30k synthetic plots)</li></ul></li> </ul> <p>At inference time, the chart classifier is run first and then each group of charts will be handled by its dedicated model.</p> <p>At the beginning our process had only step 2 and 3, and we used model from step 2 as classifier. But then we observed that adding a lot of synthetic data decreases the performance of classifying task. As a result, training for the classifier was separated.</p> <p>It is also possible to perform classifying task with a simple CNN, however from our early experiments we saw that using matcha gave a little better performance, so we sticked with its usage.</p> <p><strong>Some training details:</strong></p> <ul> <li>Learning rate 3e-5 for 1st and 2nd stages, 2e-5 for 3rd stage, using cosine schedule with warmup.</li> <li>Adafactor optimizer.</li> <li>Max patches 2048, max length 512.</li> <li>Freeze first 4 layers of encoder.</li> <li>Prediction string: <code>&lt;chart_type&gt;&lt;start&gt;x1|y1;…;xn|yn&lt;end&gt;</code></li> <li>Histogram is treated as a separated chart type in training and mapped back to vertical bar at inference.</li> <li>Dynamic rounding based on the range of data series. </li> </ul> <p><strong>3. Scatter plots processing</strong></p> <p>While the mentioned above method helped us reach good results for bar, dot and line charts, we found scatter plots much harder for matcha to handle. As a result, we developed a separate scheme only for scatter:</p> <ul> <li>Plot area and tick label detection: use outputs from <a href="https://github.com/pengyu965/ChartDete" target="_blank">CACHED</a> .</li> <li>Textline Rotation : use MobilenetV2 backbone to train angle classification task.</li> <li>Textline OCR: use <a href="https://github.com/pbcquoc/vietocr" target="_blank">vietocr</a> ’s seq2seq model to train textline reading task. We modified the encoder to use ResNeXt50 with dilated convolution.</li> <li>Scatter marker detection: use Mask-RCNN with <a href="https://github.com/mlpc-ucsd/CoaT" target="_blank">CoaT</a> backbone for this task. We trained the model with detectron2. </li> <li>Final values of each marker are deduced from marker position, tick label position and tick values.</li> </ul> <p>This process helped us score 0.09 public and 0.28 private for scatter and is the key for us to survive the private test set.</p> <p>Thank you for reading and please let us know if you have any questions.</p> <p>Edit: Inference notebook for our solution now available: <a href="https://www.kaggle.com/code/qdv206/benetech-5th-place-inference" target="_blank">https://www.kaggle.com/code/qdv206/benetech-5th-place-inference</a></p>
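<p>The prediction string format <code>&lt;chart_type&gt;&lt;start&gt;x1|y1;…;xn|yn&lt;end&gt;</code> described above is straightforward to post-process. A minimal parsing sketch (the token names come from the write-up; the helper itself is mine):</p> <pre><code>def parse_prediction(text):
    """Split '&lt;chart_type&gt;&lt;start&gt;x1|y1;...;xn|yn&lt;end&gt;' into chart type and x/y series."""
    chart_type, _, rest = text.partition("&lt;start&gt;")
    chart_type = chart_type.strip("&lt;&gt; ")
    series = rest.split("&lt;end&gt;")[0]
    xs, ys = [], []
    for pair in series.split(";"):
        if "|" in pair:
            x, y = pair.split("|", 1)
            xs.append(x.strip())
            ys.append(y.strip())
    return chart_type, xs, ys

print(parse_prediction("&lt;line&gt;&lt;start&gt;Jan|1.5;Feb|2.0;Mar|2.25&lt;end&gt;"))
# ('line', ['Jan', 'Feb', 'Mar'], ['1.5', '2.0', '2.25'])
</code></pre> <p>In practice the numeric series would then be cast to float and rounded with the dynamic rounding mentioned above (precision chosen from the value range).</p>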
Vesuvius Challenge - Ink Detection
10th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3>Summary</h3> <ul> <li>pretrain 3D encoder using the <a href="https://github.com/educelab/ink-id/blob/develop/inkid/model/model.py" target="_blank">ink-id InkClassifier3DCNN</a>, sampled size = 64, stride = 32 (improved CV by 0.03-0.06)</li> <li>3D encoder -&gt; pool along z-axis -&gt; 2D FPN Decoder</li> <li>Simple self-implemented attention pooling</li> <li>Data Sampling -&gt; 1 region containing positive ink : 8 regions with no ink at all</li> <li>Denoiser</li> </ul> <h3>Data preparation</h3> <ul> <li>5 folds, split ink_id = 2 into 3 equal regions along height</li> <li>Resolution of 224, stride of 56, slices 20-36</li> <li>Sampled regions with ink and regions without ink using a ratio of 1:8 to generate more data; this also makes training more stable compared to sampling with a moving window</li> <li>Augmentation:</li> </ul> <pre><code>import cv2
import albumentations as A
from albumentations.pytorch import ToTensorV2

# Note: most keyword names were garbled in the original post and are restored
# below as a best guess; they may not match the author's exact arguments.
[
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomRotate90(p=0.5),
    A.Affine(rotate=0, translate_percent=0.1, scale=[0.9, 1.5], shear=0, p=0.5),
    A.OneOf([
        A.RandomToneCurve(scale=0.3, p=0.2),
        A.RandomBrightnessContrast(brightness_limit=(-0.1, 0.2), contrast_limit=(-0.4, 0.5),
                                   brightness_by_max=True, always_apply=False, p=0.8)
    ], p=0.5),
    A.OneOf([
        A.ShiftScaleRotate(shift_limit=None, scale_limit=[-0.15, 0.15], rotate_limit=[-30, 30],
                           interpolation=cv2.INTER_LINEAR, border_mode=cv2.BORDER_CONSTANT,
                           value=0, mask_value=None, shift_limit_x=[-0.1, 0.1],
                           shift_limit_y=[-0.2, 0.2], always_apply=False, p=0.5),
        A.ElasticTransform(alpha=1, sigma=20, alpha_affine=10, interpolation=cv2.INTER_LINEAR,
                           border_mode=cv2.BORDER_CONSTANT, value=0, mask_value=None,
                           approximate=False, same_dxdy=False, p=0.5),
        A.GridDistortion(num_steps=5, distort_limit=0.3, interpolation=cv2.INTER_LINEAR,
                         border_mode=cv2.BORDER_CONSTANT, value=0, mask_value=None,
                         normalized=False, p=0.5),
    ], p=0.5),
    A.OneOf([
        A.GaussNoise(var_limit=[10, 50], p=0.5),
        A.GaussianBlur(p=0.5),
        A.MotionBlur(p=0.5),
    ], p=0.5),
    A.CoarseDropout(max_holes=3, max_height=0.15, max_width=0.25, mask_fill_value=0, p=0.5),
    A.Normalize(
        mean=[0]*in_chans,
        std=[1]*in_chans,
    ),
    ToTensorV2(transpose_mask=True),
]
</code></pre> <h3>Model</h3> <ul> <li><a href="https://github.com/kenshohara/3D-ResNets-PyTorch" target="_blank">resnet3D</a> as backbone</li> <li>Attention pooling</li> </ul> <pre><code>import torch
import torch.nn as nn

# Class and argument names were lost in the original post and are restored here;
# the weights are softmax-normalised over the depth (z) axis and used to pool it away.
class AttentionPooling(torch.nn.Module):
    def __init__(self, depth, height, width):
        super().__init__()
        self.attention_weights = nn.Parameter(torch.ones(1, 1, depth, height, width))
        self.softmax = nn.Softmax(dim=2)

    def forward(self, x):  # x: (batch, channels, depth, height, width)
        attention_weights = self.softmax(self.attention_weights)
        pooled_output = torch.mul(attention_weights, x)
        pooled_output = torch.sum(pooled_output, dim=2)  # (batch, channels, height, width)
        return pooled_output
</code></pre> <ul> <li>FPN as decoder</li> <li>Denoiser: this was inspired by diffusion models, where the model predicts the noise</li> </ul> <pre><code># fragment from the model definition (smp = segmentation_models_pytorch);
# the encoder name was lost in the original post, so a placeholder is shown
if cfg.use_denoiser:
    self.denoiser = smp.Unet(
        encoder_name="resnet18",   # placeholder
        encoder_weights=None,
        in_channels=1,
        classes=1,
        activation=None,
    )

# in forward(): the denoiser predicts the noise, which is subtracted from the mask logits
if self.cfg.use_denoiser:
    noise = self.denoiser(masks)
    masks = masks - noise
</code></pre> <ul> <li>loss: bce</li> </ul> <h3>Training</h3> <ul> <li>amp</li> <li>adamw</li> </ul> <h3>Inference</h3> <ul> <li>tta: rot90 * 3 + original</li> <li>threshold mainly determined by CV; I tried using the 93rd percentile but did not use it, as I was unsure about the ink composition of the private dataset.</li> </ul> <h3>General pipeline</h3> <ul> <li>pretrain with ink-id InkClassifier3DCNN -&gt; save encoder -&gt; load into segmentation model -&gt; train segmentation model -&gt; inference</li> </ul> <h3>Result</h3> <ul> <li>CV: 0.685</li> <li>public LB: 0.69</li> <li>private LB: 0.65</li> </ul> <h2>Code</h2>
<ul> <li><a href="https://www.kaggle.com/code/fengqilong/vesuvius-inference" target="_blank">Inference</a></li> <li><a href="https://github.com/fengql123/kaggle-vesuvius-10th-place-solution/tree/main" target="_blank">Training Code</a></li> </ul>
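<p>To make the z-axis attention pooling concrete, here is a small self-contained shape check mirroring the module above (channel and spatial sizes are illustrative, not taken from the write-up): a 3D encoder feature map of shape (batch, channels, depth, height, width) is reduced to (batch, channels, height, width) before being passed to the 2D FPN decoder.</p> <pre><code>import torch
import torch.nn as nn

class ZAttentionPool(nn.Module):
    """Learned soft-attention over the depth (z) axis of a 3D feature map."""
    def __init__(self, depth, height, width):
        super().__init__()
        self.attention_weights = nn.Parameter(torch.ones(1, 1, depth, height, width))

    def forward(self, x):  # x: (B, C, D, H, W)
        w = torch.softmax(self.attention_weights, dim=2)  # normalise over depth
        return (x * w).sum(dim=2)                          # result: (B, C, H, W)

feat = torch.randn(2, 256, 4, 28, 28)   # dummy 3D-encoder output
pool = ZAttentionPool(depth=4, height=28, width=28)
print(pool(feat).shape)  # torch.Size([2, 256, 28, 28]), then fed to the 2D FPN decoder
</code></pre>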
Benetech - Making Graphs Accessible
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners and thanks to the organizers for hosting this interesting competition.<br> I would also like to thank my teammate <a href="https://www.kaggle.com/fuumin621" target="_blank">@fuumin621</a> for working with me on this challenging task.</p> <h1>SUMMARY</h1> <p>Similar to other solutions, our pipeline also consists of object detection for scatter points and end-to-end plot prediction by deplot for other graphs.<br> <strong>Scatter plots</strong>: plot-bb detection with YOLOX + scatter point detection with YOLOX + OCR of max/min axis labels with matcha<br> <strong>Other graphs</strong>: deplot trained with original synthetic data</p> <h1>Scatter point detection</h1> <p>We used the following two steps object detection to detect scatter points and determined the coordinates of scatter points from the axis labels read by matcha.</p> <p>Step 1. plot-bb detection by YOLOX (model size : x, image size : 640)<br> Step 2. scatter points detection with "plot-bb cropped" image by YOLOX (model size : x, image size : 960)</p> <p>Training data for step.2 is the competition data (generated, extracted)+ 538 scatter plots from ICDAR dataset.</p> <p>Public LB : Private LB = 0.09 : 0.29</p> <h1>Bar graph, line graph, dot graph</h1> <p>We used a variety of graph data sources for training deplot,</p> <ol> <li>generated graph (competition data)</li> <li>extracted graph (competition data)</li> <li>ICDAR dataset (1185 single series + 2992 multiple series)</li> <li><a href="https://www.kaggle.com/brendanartley" target="_blank">@brendanartley</a> 's 500k dataset (except scatter)</li> <li>original synthetic dataset (vertical bar: 40000, horizontal bar: 40000, line: 40000)</li> <li>original synthetic dataset (histogram : 10000)</li> </ol> <h2>original dataset (generated by matplotlib)</h2> <p>Our key focus of this competition is data generation.<br> We generated graphs with various patterns to make our model robust against unknown private dataset.<br> The generated graphs were evaluated using the competition dataset (extracted) and the ICDAR dataset.<br> Continuous graph modification and feedback from error analysis allowed predictions to be made on difficult data.<br> Our data was especially helpful in improving the accuracy of line graphs (so it is very important whether the line is on the tick label or not!).</p> <p>Some attempts for improving common mistakes:</p> <ul> <li>line plot with no value on some tick labels</li> <li>JPG compression to lower quality (simulate low resolution and blurred characters)</li> <li>add symbols ($ % , .) 
to distinguish numbers and symbols (like 8 and $)</li> <li>extended predicted digits for small values (like &lt; 0.0001)</li> <li>separate labels into multiple lines</li> <li>etc..</li> </ul> <p>Some examples of our generated graphs are shown below.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6102861%2F8e95529288fbfda2462e21336fb43b5b%2F00000008.jpeg?generation=1687351996558599&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6102861%2F392cab76953662bed1d6fc6f17be4e59%2F00000054.jpeg?generation=1687352984294446&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6102861%2F2dfff51c40cb57f1793b4c56fbea2560%2F00000042.jpeg?generation=1687353205159183&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6102861%2F584b0585fa82beffd0d3cd771c347bd3%2F00000081.jpeg?generation=1687357868152994&amp;alt=media" alt=""></p> <h2>Training (deplot)</h2> <h3>training data configuration</h3> <table> <thead> <tr> <th>source</th> <th># of samples per epoch</th> <th>total graphs</th> </tr> </thead> <tbody> <tr> <td>generated</td> <td>1000</td> <td>50000</td> </tr> <tr> <td>extracted</td> <td>953</td> <td>953</td> </tr> <tr> <td>ICDAR</td> <td>1000</td> <td>4177</td> </tr> <tr> <td>500k dataset</td> <td>1000</td> <td>400000</td> </tr> <tr> <td>original synthetic data</td> <td>2000</td> <td>120000</td> </tr> <tr> <td>original synthetic data (histogram)</td> <td>100</td> <td>10000</td> </tr> </tbody> </table> <p>For each epoch, we randomly sampled the graphs for each source at the above ratios.</p> <h3>training conditions</h3> <p>epochs: 50<br> learning rate: 2e-4<br> scheduler: cosine annealing<br> batch (accumulation): 64</p> <h1>Final results</h1> <table> <thead> <tr> <th>chart-type</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>all</td> <td>0.86</td> <td>0.69</td> </tr> <tr> <td>vertical bar</td> <td>0.39</td> <td>0.26</td> </tr> <tr> <td>horizontal bar</td> <td>0.04</td> <td>0.01</td> </tr> <tr> <td>line</td> <td>0.32</td> <td>0.12</td> </tr> <tr> <td>scatter</td> <td>0.09</td> <td>0.29</td> </tr> <tr> <td>dot</td> <td>0</td> <td>0</td> </tr> </tbody> </table>
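<p>To illustrate the "original dataset (generated by matplotlib)" idea above, a stripped-down generator for a single vertical-bar chart could look like the sketch below. Fonts, JPEG quality and value ranges are arbitrary choices for illustration, not the team's generator (which also covers error bars, histograms, multi-line labels, and so on).</p> <pre><code>import json
import random
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

def make_vertical_bar(path_stem):
    n = random.randint(3, 8)
    xs = [f"cat {i}" for i in range(n)]
    ys = [round(random.uniform(0, 100), 1) for _ in range(n)]

    fig, ax = plt.subplots(figsize=(4, 3), dpi=100)
    ax.bar(xs, ys, color=plt.cm.tab10(random.randrange(10)))
    ax.tick_params(axis="x", labelrotation=random.choice([0, 45, 90]))
    ax.grid(random.random() &lt; 0.5, axis="y", linewidth=0.5)
    fig.tight_layout()

    # low JPEG quality mimics blurry "extracted" charts (pil_kwargs needs matplotlib 3.1 or newer)
    fig.savefig(f"{path_stem}.jpg", format="jpg", pil_kwargs={"quality": random.randint(40, 80)})
    plt.close(fig)

    # keep the ground-truth data series next to the image
    with open(f"{path_stem}.json", "w") as f:
        json.dump({"chart_type": "vertical_bar", "x": xs, "y": ys}, f)

make_vertical_bar("synthetic_vbar_000")
</code></pre>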
IceCube - Neutrinos in Deep Ice
10th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The last few weeks have been intense, but in the end I am super happy that I've finally got my first solo gold medal. Great competition overall! It was great to see the stability between local validation and LB scores (even with using only <strong>1.5%</strong> of the data as <strong>validation</strong>).</p> <p>The model training process involved a train-validation split, wherein <strong>batches 11-660</strong> were utilized for <strong>training</strong>, and <strong>batches 1-10</strong> were allocated for <strong>validation</strong>. An ensemble of 8 models was formed using <strong>hill climbing</strong> or <strong>Nelder-Mead</strong> optimization techniques, with a submission time of approximately 3-3.5 hours. The inclusion of more models in the ensemble resulted in a better score; however, since I was submitting the best blend only a few hours before the end of the competition, there was not enough time to do inference on a bigger ensemble. The same model architectures were trained multiple times and blended.</p> <p>The ensemble's local validation score was <strong>0.97747</strong>, while the public and private LB scores were <strong>0.976</strong>.</p> <p>Input data for the model consisted of the following 9 features per event: <strong>sensor_x, sensor_y, sensor_z, time, charge, auxiliary, is_main_sensor, is_deep_veto, and is_deep_core</strong>.</p> <p>Four model architectures have been selected for the final submission.</p> <p><strong>Model 1</strong>:</p> <p>Validation score 1: <strong>1.0016</strong><br> Validation score 2: <strong>1.0005</strong><br> Loss function: <strong>CrossEntropyLoss</strong></p> <p>Data preprocessing steps:</p> <ul> <li>sensor_x, sensor_y, and sensor_z are divided by 600</li> <li>time is divided by 1000 and has its minimum value subtracted</li> <li>charge is divided by 300</li> </ul> <pre><code># Note: the class name, hidden sizes, layer count and dropout were lost from the
# original post; they are restored here as named arguments, not the author's values.
class Model1(pl.LightningModule):
    def __init__(self, n_features, hidden_size, fc_size, num_layers, dropout, bin_num):
        super().__init__()
        self.bin_num = bin_num
        self.gru = nn.GRU(n_features, hidden_size, num_layers=num_layers, dropout=dropout,
                          batch_first=True, bidirectional=True)
        self.fc1 = nn.Sequential(nn.Linear(2 * hidden_size, fc_size), nn.ReLU())
        self.fc2 = nn.Linear(fc_size, bin_num * bin_num)

    def forward(self, x, batch_sizes):
        batch_sizes = batch_sizes.cpu()
        x = pack_padded_sequence(x, batch_sizes, batch_first=True, enforce_sorted=False)
        x, _ = self.gru(x)
        x, _ = pad_packed_sequence(x, batch_first=True)
        x = x.sum(dim=1)                              # sum over pulses, then divide by length
        x = x.div(batch_sizes.unsqueeze(-1).cuda())   # = mean over the valid pulses of each event
        x = self.fc1(x)
        x = self.fc2(x)
        return x
</code></pre> <p>Output bins were generated using <a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-training-tpu" target="_blank">code</a> from <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a>.
</p> <p><strong>Models 2-4</strong>:</p> <p>Loss function: <strong>VonMisesFisher3DLoss</strong></p> <p>Data preprocessing steps:</p> <ul> <li>sensor_x, sensor_y, and sensor_z are divided by 500</li> <li>time is scaled by subtracting 1.0e04 and dividing by 3.0e4</li> <li>charge is transformed using the logarithm base 10 and then divided by 3.0</li> </ul> <p>Validation score 3: <strong>0.9847</strong><br> Validation score 4: <strong>0.9859</strong></p> <pre><code> (pl.LightningModule): (): self.bilstm = nn.LSTM(, , num_layers=, dropout=, batch_first=, bidirectional=) self.fc1 = nn.Sequential(nn.Linear(, ), nn.ReLU()) self.dropout = nn.Dropout() self.fc2 = nn.Linear(, ) (): batch_sizes = batch_sizes.cpu() x = pack_padded_sequence(x, batch_sizes, batch_first=, enforce_sorted=) x, _ = self.bilstm(x) x, _ = pad_packed_sequence(x, batch_first=) x = x.(dim=) x = x.div(batch_sizes.unsqueeze(-).cuda()) x = self.fc1(x) x = self.dropout(x) pred = self.fc2(x) kappa = pred.norm(dim=, p=) + pred_x = pred[:, ] / kappa pred_y = pred[:, ] / kappa pred_z = pred[:, ] / kappa pred = torch.stack([pred_x, pred_y, pred_z, kappa], dim=) pred </code></pre> <p>Validation score 5: <strong>0.9872</strong><br> Validation score 6: <strong>0.9887</strong></p> <pre><code> (pl.LightningModule): (): ().__init__() self.embedding = nn.Linear(, ) self.bilstm = nn.LSTM(, , num_layers=, dropout=, batch_first=, bidirectional=) self.fc1 = nn.Sequential(nn.Linear(, ), nn.ReLU()) self.fc2 = nn.Linear(, ) (): x = self.embedding(x) batch_sizes = batch_sizes.cpu() x = pack_padded_sequence(x, batch_sizes, batch_first=, enforce_sorted=) x, _ = self.bilstm(x) x, _ = pad_packed_sequence(x, batch_first=) x = x.(dim=) x = x.div(batch_sizes.unsqueeze(-).cuda()) x = self.fc1(x) pred = self.fc2(x) kappa = pred.norm(dim=, p=) + pred_x = pred[:, ] / kappa pred_y = pred[:, ] / kappa pred_z = pred[:, ] / kappa pred = torch.stack([pred_x, pred_y, pred_z, kappa], dim=) pred </code></pre> <p>Validation score 7: <strong>0.9842</strong><br> Validation score 8: <strong>0.9841</strong></p> <pre><code> (pl.LightningModule): (): self.embedding = nn.Linear(, ) self.bilstm = nn.LSTM(, , num_layers=, dropout=, batch_first=, bidirectional=) self.fc1 = nn.Sequential(nn.Linear(lstm_units, ), nn.ReLU()) self.fc2 = nn.Linear(, ) (): batch_sizes = batch_sizes.cpu() x = self.embedding(x) x = pack_padded_sequence(x, batch_sizes, batch_first=, enforce_sorted=) x, _ = self.bilstm(x) x, _ = pad_packed_sequence(x, batch_first=) x = x.(dim=) x = x.div(batch_sizes.unsqueeze(-).cuda()) x = self.fc1(x) pred = self.fc2(x) kappa = pred.norm(dim=, p=) + pred_x = pred[:, ] / kappa pred_y = pred[:, ] / kappa pred_z = pred[:, ] / kappa pred = torch.stack([pred_x, pred_y, pred_z, kappa], dim=) pred </code></pre> <p>Hyperparameters:<br> Optimizer: <strong>Adam</strong><br> Scheduler: <strong>CosineAnnealingLR</strong><br> Batch size: <strong>2048</strong><br> Max pulses: <strong>128</strong><br> Max LR: <strong>1e-3</strong> or <strong>5e-4</strong><br> Min LR: <strong>1e-6</strong><br> Warmup steps: <strong>2000</strong><br> Epochs: <strong>10-15</strong> (possibly a few extra fine tuning epochs)</p> <p>The DL library used for the competition was PyTorch (Lightning), which delivered better results than TensorFlow. While blending multiple models in TensorFlow, the code produced some unanticipated errors which I wasn't able to debug. 
It was just simpler to switch to PyTorch.</p> <p>I tried a few different transformer architectures, but didn't have enough time to push it through the end. I retrained a few models similar to graphnet (<a href="https://www.kaggle.com/code/amoshuangyc/icecube-gnn-baseline-rewrite" target="_blank">code</a> from <a href="https://www.kaggle.com/amoshuangyc" target="_blank">@amoshuangyc</a> was very useful), but it didn't give a boost to the final ensemble.</p>
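<p>Models 2-4 above share the same output head: a 3-vector whose length is used as the von Mises-Fisher concentration kappa and whose direction is the predicted neutrino direction, consumed by <code>VonMisesFisher3DLoss</code>. Below is a hedged sketch of that head plus the conversion of the unit vector back to azimuth/zenith; the feature size is a placeholder, since the original hidden sizes were not preserved in the post.</p> <pre><code>import math
import torch
import torch.nn as nn

class DirectionHead(nn.Module):
    """Map pooled sequence features to (x, y, z, kappa) for a vMF 3D loss."""
    def __init__(self, in_features):
        super().__init__()
        self.fc = nn.Linear(in_features, 3)

    def forward(self, feats, eps=1e-8):
        pred = self.fc(feats)
        kappa = pred.norm(dim=1, p=2) + eps       # vector length = concentration
        direction = pred / kappa.unsqueeze(-1)    # unit direction vector
        return torch.cat([direction, kappa.unsqueeze(-1)], dim=1)

def to_angles(direction):
    """Convert unit (x, y, z) vectors to (azimuth, zenith) for the competition metric."""
    x, y, z = direction[:, 0], direction[:, 1], direction[:, 2]
    azimuth = torch.atan2(y, x) % (2 * math.pi)
    zenith = torch.acos(z.clamp(-1, 1))
    return azimuth, zenith

head = DirectionHead(in_features=128)   # placeholder feature size
out = head(torch.randn(4, 128))
print(out.shape)                        # torch.Size([4, 4])
print(to_angles(out[:, :3]))
</code></pre>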
Google - Isolated Sign Language Recognition
Top 8% Bronze Medal Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><blockquote> <p><strong>Many congratulations to all the winners in this competition. Huge thanks to all the discussion topics, public notebooks and datasets - most (if not all of) of my learning came from them.</strong></p> </blockquote> <p>First I would like to thank the organizers of this wonderful competition. </p> <p>To say that I learned a lot by competing in this competition would really be an understatement. I wanted to take on the challenge of using Tensorflow in this competition and boy did it test me. If there was one thing that gave me hope in this competition it was <a href="https://www.kaggle.com/markwijkhuizen" target="_blank">@markwijkhuizen</a>'s amazing public <a href="https://www.kaggle.com/code/markwijkhuizen/gislr-tf-data-processing-transformer-training" target="_blank">notebook</a>. It was the foundation to my entire work in this competition.</p> <p>Initially I was using my own Transformer architectures in this competition which were not bad themselves but as <a href="https://www.kaggle.com/tatamikenn" target="_blank">@tatamikenn</a> pointed it, the public notebooks had really good Transformer architectures, with a few tweaks and code cleanups I was able to extract good results both on CV and LB, this gave me a significant increase in CV and LB right on my first experiment. </p> <blockquote> <p>I have open sourced the code base on GitHub -&gt; <a href="https://github.com/abhinand5/isolated-sign-language-recognition" target="_blank">https://github.com/abhinand5/isolated-sign-language-recognition</a>. </p> <p>Leave a ⭐️ if you like my work.</p> </blockquote> <h3>What worked for me?</h3> <ul> <li>Flip augmentation (<code>aug_prob = 0.4</code>)</li> <li>10 Fold Cross Validation</li> <li><code>label_smoothing = 0.3</code></li> <li><code>embed_dim = 256</code></li> <li><code>input_size = 32</code></li> <li><code>num_transformer_blocks = 2</code> (Using 1 was under-fitting and 3 was over fitting even after several experiments of trying to tune)</li> <li>Attention heads: <code>MHA_HEADS = 4</code> and <code>MHA_HEADS = 8</code></li> <li>Train on the entire dataset before submission</li> <li>FP16 Quantization</li> <li>GeLU activation</li> </ul> <h3>What did NOT work for me?</h3> <ul> <li><code>input_size = 64</code></li> <li>Knowledge Distillation (wasted a lot of time in this)</li> <li>Rotate augmentations (degraded CV and LB)</li> <li>Adding Gaussian noise to the inputs (degraded CV and LB)</li> <li>Removing <code>LayerNorm</code> from Transformer blocks (although the arch is shallow it did not work for me)</li> <li>Strip Pruning </li> <li>Dynamic Range Quantization (increased the inference time)</li> <li>2D Affine transforms</li> </ul> <h3>Final Solution:</h3> <p>There were two different model architectures in the final solution. 
</p> <p><strong>Dataset:</strong> Only Competition Data</p> <p><strong>Model 1:</strong></p> <ul> <li>Training <code>epochs = 50</code></li> <li>Add LayerNorm</li> <li>CV Strategy: <code>StratifiedGroupKFold(k=10)</code></li> <li>Optuna tuning -&gt; Best OOF on all folds</li> <li><code>SEED = 555</code></li> <li>Ensemble of Best Folds: <code>[3,6,7,9]</code> (based on OOF score)</li> <li><code>QUANTIZE_MODEL = True</code></li> <li><code>QUANT_METHOD = "float16"</code></li> <li>No Augmentations</li> </ul> <p>Below is the complete model conf:</p> <pre><code> </code></pre> <p><strong>Model 2:</strong></p> <ul> <li>Training <code>epochs = 100</code></li> <li>Add LayerNorm</li> <li>Single Split -&gt; Optuna Tuning -&gt; Use best hyper params to train on entire train data</li> <li><code>SEED = 555</code></li> <li><code>QUANTIZE_MODEL = True</code></li> <li><code>QUANT_METHOD = "float16"</code></li> <li>Flip Augmentations (<code>aug_prob=0.4</code>)</li> </ul> <p>Below is the complete model conf:</p> <pre><code> </code></pre> <p><strong>Submission:</strong><br> The submission is a simple ensemble of 5 models.</p> <pre><code>Model 1 (folds=[3,6,7,9]) + Model 2
Private Score: 0.82815
Public Score: 0.73896
</code></pre> <p>Ensembling code can be found below:</p> <pre><code>import tensorflow as tf

# Note: parts of this snippet were garbled in the original post; structural pieces
# (dict key names, tensor axes, default weights) are restored here as best guesses.
class TFLiteEnsembleModel(tf.Module):
    def __init__(self, preprocess_layer, models, weights=None):
        super(TFLiteEnsembleModel, self).__init__()
        self.preprocess_layer = preprocess_layer
        self.models = models
        # default to uniform weights when none are given
        self.weights = weights if weights is not None else [1 / len(models)] * len(models)
        self.weights = tf.reshape(self.weights, (-1, 1))

    def __call__(self, inputs):
        x, non_empty_frame_idxs = self.preprocess_inputs(inputs)
        outputs = []
        for _model in self.models:
            output = _model({"frames": x, "non_empty_frame_idxs": non_empty_frame_idxs})
            outputs.append(output)
        outputs = tf.concat(outputs, axis=0)
        weighted_outputs = outputs * self.weights
        weighted_sum = tf.reduce_sum(weighted_outputs, axis=0, keepdims=True)
        outputs = tf.squeeze(weighted_sum, axis=0)
        return {"outputs": outputs}

    def preprocess_inputs(self, inputs):
        x, non_empty_frame_idxs = self.preprocess_layer(inputs)
        x = tf.expand_dims(x, axis=0)
        non_empty_frame_idxs = tf.expand_dims(non_empty_frame_idxs, axis=0)
        return x, non_empty_frame_idxs
</code></pre>
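<p>FP16 post-training quantization (<code>QUANT_METHOD = "float16"</code> above) is a small change in the TFLite converter. Below is a generic sketch using the standard TensorFlow Lite API; the Keras model, input size and file name are placeholders rather than the actual transformer used here.</p> <pre><code>import tensorflow as tf

# placeholder model standing in for the trained sign-recognition transformer
model = tf.keras.Sequential([tf.keras.layers.Dense(250, input_shape=(1024,))])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # store weights as float16
tflite_model = converter.convert()

with open("model_fp16.tflite", "wb") as f:
    f.write(tflite_model)
</code></pre> <p>Compared to dynamic-range quantization, which the author found to increase inference time, FP16 quantization roughly halves the stored model size while keeping floating-point execution.</p>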
Predict Student Performance from Game Play
10th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I respect all of you for your tough and long term fight and I am glad that we were able to fight together. I also want to thank my teammates ( <a href="https://www.kaggle.com/tereka" target="_blank">@tereka</a>, <a href="https://www.kaggle.com/deepkun1995" target="_blank">@deepkun1995</a>, <a href="https://www.kaggle.com/ryotak12" target="_blank">@ryotak12</a>, <a href="https://www.kaggle.com/yurimaeda" target="_blank">@yurimaeda</a>) for their hard work.</p> <p>I'm happy because this is the first time I got a gold medal.</p> <h1>Overview</h1> <p>We're not doing anything special in our solution. We used 1 NN, 1 LightGBM and 4 XGBoost with various features for the Stage 1, and MLP and Logistic Regression stacking for the Stage 2. We used average and threshold optimization for the Stage 3.</p> <ul> <li>CV: 0.70573</li> <li>Public LB: 0.706</li> <li>Private LB: 0.702</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5563440%2F010fd90d184db7dc2858562025f3867a%2Fpsp_solution_overview.drawio%20(1).png?generation=1688311769280377&amp;alt=media" alt="psp_solution_overview"></p> <h2>Code</h2> <ul> <li>Inference: <a href="https://www.kaggle.com/code/shu421/psp-10thsolution-public0706-private0702/notebook" target="_blank">https://www.kaggle.com/code/shu421/psp-10thsolution-public0706-private0702/notebook</a></li> <li>shu421 XGBoost and Stacking Training: <a href="https://github.com/shu421/Kaggle_PSP_10thSolution" target="_blank">https://github.com/shu421/Kaggle_PSP_10thSolution</a></li> </ul> <h1>Models</h1> <h2>Stage 1: XGBoost (shu421 part)</h2> <p>I created XGBoost for each level_group. The base features are not so different from those in the public code. It is an aggregate feature of elapsed_time_diff and hover_duration, and other numerical features. However, in addition to these, I used previous level_group features and predicted probability as current level_group features. <br> I used numpy and numba to create them. Initially, I had used polars, but I switched to numba which is my teammate <a href="https://www.kaggle.com/yurimaeda" target="_blank">@yurimaeda</a> 's approach. The submission time was significantly reduced from 2 hours with polars to just 13 minutes with numpy and numba. I used 5-StratifiedGroupKFold as cross-validation strategy.</p> <ul> <li>CV: 0.70111</li> <li>Public LB: 0.702</li> <li>Private LB: 0.699</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5563440%2Faabaecb685f9740c8b542c8282791901%2Fpsp_solution_xgb.drawio%20(1).png?generation=1688311791487937&amp;alt=media" alt="psp_solution_xgboost"></p> <h2>Stage 2: Stacking</h2> <p>We created MLP and Logistic Regression for each question. 
Thus, there are 18 models each, and the output dimension of each model is (n_samples, 1).<br> Since stacking was very prone to overfitting, we kept the model architecture simple.<br> Here is the code for the MLP.</p> <pre><code>import torch.nn as nn

class MLP(nn.Module):
    # Note: the dropout rate was lost from the original post; a placeholder is used.
    def __init__(self, input_size, hidden_size, output_size, dropout=0.2):
        super().__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.head = nn.Linear(hidden_size, output_size)
        self.dropout = nn.Dropout(dropout)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.fc1(x))
        x = self.dropout(x)
        x = self.head(x)
        return x
</code></pre> <h2>Stage 3: Threshold Optimization</h2> <p>We take the average of the predictions of the 2 models in Stage 2 and optimize the threshold for each question.</p> <pre><code>import numpy as np
from sklearn.metrics import f1_score
from scipy.optimize import minimize

def f1_score_macro_for_thresholds(y_true, y_pred_prob, thresholds):
    y_pred_binary = (y_pred_prob &gt; thresholds).astype(int)
    score = f1_score(y_true.flatten(), y_pred_binary.flatten(), average="macro")
    return score

# Note: the initial threshold value was lost from the original post; 0.5 is a placeholder.
def optimize_thresholds(y_true, y_pred_prob, method="Powell"):
    n_labels = y_pred_prob.shape[1]
    init_thresholds = np.full(n_labels, 0.5)
    objective = lambda thresholds: -f1_score_macro_for_thresholds(
        y_true, y_pred_prob, thresholds
    )
    result = minimize(
        objective, init_thresholds, bounds=[(0, 1)] * n_labels, method=method
    )
    return result.x
</code></pre> <p>We tried several optimization methods, but Powell worked best.<br> This method improved CV by 0.008.</p> <h1>What worked</h1> <ul> <li>feature engineering<ul> <li>elapsed_time_diff and hover_duration agg features were important</li></ul></li> <li>threshold optimization</li> <li>ensemble</li> <li>lstm + transformer (ryota part)</li> <li><a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/416963" target="_blank">sort_frame</a></li> </ul> <h1>What didn't work</h1> <ul> <li>1D CNN</li> <li>stacking (the methods below seemed to overfit)<ul> <li>CNN (1D/2D)</li> <li>RNN</li> <li>level_group preds</li></ul></li> <li>datetime agg features</li> <li>use NN embedding as gbdt features</li> <li>TimeSeriesClustering (elapsed_time_diff)</li> <li>additional data</li> </ul>
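<p>For context on the Stage 2 stacking described above: one small model is fit per question on the Stage 1 out-of-fold probabilities. Here is a minimal sketch with logistic regression on dummy data; the feature layout, regularisation and shapes are assumptions rather than the team's exact pipeline, and in practice the fit must be done within cross-validation folds to avoid leakage.</p> <pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression

n_sessions, n_questions, n_stage1_models = 1000, 18, 6
rng = np.random.default_rng(0)

# stage-1 OOF probabilities, one column per (model, question) pair (assumed layout)
oof = rng.random((n_sessions, n_stage1_models * n_questions))
labels = rng.integers(0, 2, size=(n_sessions, n_questions))

stackers, stacked_oof = [], np.zeros((n_sessions, n_questions))
for q in range(n_questions):
    clf = LogisticRegression(C=0.1, max_iter=1000)   # kept simple, since stacking overfits easily
    clf.fit(oof, labels[:, q])                        # in practice: fit on training folds only
    stacked_oof[:, q] = clf.predict_proba(oof)[:, 1]
    stackers.append(clf)

print(stacked_oof.shape)  # (1000, 18), then averaged with the MLP stacker and thresholded
</code></pre>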
Benetech - Making Graphs Accessible
#13th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and the host for organizing the competition. I want to do a quick write-up of my simple solution. Everything is basic; all steps can be done using Kaggle and Google Colab (Pro) computing resources.</p> <p>Final Submission notebook: <a href="https://www.kaggle.com/code/namgalielei/benetech-eval-and-infer-v12/notebook" target="_blank">https://www.kaggle.com/code/namgalielei/benetech-eval-and-infer-v12/notebook</a></p> <h2>1. Overview</h2> <p>Chart type classification + Plot BBox prediction -&gt; OCR -&gt; Chart data instance detection / segmentation (line) -&gt; Data association </p> <h2>2. Modules</h2> <h3>a. Chart type classification + Plot BBox prediction:</h3> <ul> <li>Segmentation Models PyTorch Unet with an eca-nfnet-l1 backbone and an auxiliary classification head (a short sketch of this setup is included at the end of this post). I do binary segmentation for Plot BBox prediction and softmax classification for Chart type classification.</li> <li>Ensemble of 3 folds.<br> Training code: <a href="https://colab.research.google.com/drive/1WdlqUr1ONcntWfaIksDp6Z6BD4MXDJg4?usp=sharing" target="_blank">https://colab.research.google.com/drive/1WdlqUr1ONcntWfaIksDp6Z6BD4MXDJg4?usp=sharing</a></li> </ul> <h3>b. OCR:</h3> <ul> <li>Based on Paddle ppOCRv3</li> <li>Text detection: Fine-tune the lightweight MobileNetV3-DBNet on the competition data. The target is at word-group level. The ground truth for this is provided in the JSON file.</li> <li>Text recognition: Use the pretrained model. </li> <li>Modify some logic: First, the default post-processing of Paddle TextDet (mask to quad) may result in some polygons with only 3 points (a triangle instead of a quadrilateral). I fixed this by finding the minimum bounding rotated rectangle. Second, when perspective-transforming a cropped text region, it might be rotated, so the text recognition model cannot read it well. I predict twice (no rotation and clockwise 90-degree rotation) and take the one with the higher confidence score. </li> </ul> <h3>c. Chart data detection:</h3> <ul> <li>VBar, HBar, Scatter and Dot detector: Mask RCNN Resnet50 model. Training code: <a href="https://colab.research.google.com/drive/14X97mTwAU9kxS__xaehDU7SQbe_2ivP0?usp=sharing" target="_blank">https://colab.research.google.com/drive/14X97mTwAU9kxS__xaehDU7SQbe_2ivP0?usp=sharing</a></li> <li>Line detector: a single Unet++ model with an efficientnet-b5 backbone. Training code: <a href="https://drive.google.com/file/d/1OFoogWWWP3vHAHe9g0UsY0cCfiHKOeMp/view?usp=sharing" target="_blank">https://drive.google.com/file/d/1OFoogWWWP3vHAHe9g0UsY0cCfiHKOeMp/view?usp=sharing</a><br> I annotated some images and trained these models.</li> </ul> <h3>d. Data association:</h3> <ul> <li>Some rule-based logic to associate the chart data (bar, scatter, dot) with its corresponding ticks. </li> <li>Project the pixel coordinates of ticks and chart data onto the axes-relative scale.</li> </ul> <p>Some things I did not have time to try yet: external datasets, a wider range of instance segmentation / object detection models, and generative models.</p>
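<p>As referenced in section 2a, segmentation_models_pytorch supports exactly this "segmentation plus auxiliary classification head" setup via <code>aux_params</code>. A minimal sketch follows; the encoder name, weights and input size are placeholders (the write-up uses an eca-nfnet-l1 backbone), and 5 corresponds to the competition's chart types.</p> <pre><code>import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",          # placeholder; the write-up uses an eca-nfnet-l1 encoder
    encoder_weights=None,             # set to "imagenet" to download pretrained weights
    in_channels=3,
    classes=1,                        # binary mask for the plot bounding box
    aux_params={"classes": 5, "dropout": 0.2, "activation": None},  # 5 chart types
)

x = torch.randn(2, 3, 512, 512)
mask_logits, chart_type_logits = model(x)   # the aux head makes forward return a tuple
print(mask_logits.shape, chart_type_logits.shape)  # (2, 1, 512, 512) and (2, 5)
</code></pre>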
Parkinson's Freezing of Gait Prediction
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>2nd place solution</h2> <p>First of all, thanks to the host for an interesting competition and congratulations to all the winners!</p> <h2>Summary</h2> <ul> <li>We designed separate models for tDCS FOG and DeFog.</li> <li>Both tDCS FOG and DeFog were trained using GRUs.</li> <li>Each Id was split into sequences of a specified length. During training, we used a shorter length (e.g., 1000 for tdcsfog, 5000 for defog), but for inference, a longer length (e.g., 5000 for tdcsfog, 30000 for defog) was applied. This emerged as the most crucial factor in this competition.</li> <li>For the DeFog model, we utilized the ‘notype’ data for training with pseudo-labels, significantly improving the scores; by leveraging the Event column, robust pseudo-labels could be created.</li> <li>Although CV and the public score were generally correlated, they became uncorrelated as the public score increased. Additionally, the CV was quite variable and occasionally surprisingly high. Therefore, we employed the public score for model selection, while CV guided the sequence length.</li> </ul> <h2>tDCS FOG</h2> <ul> <li><p>Features</p> <ul> <li>Three provided accelerations</li> <li>Feature engineering performed on each acceleration involved:<ul> <li>Difference between prior and subsequent accelerations</li> <li>Cumulative sum of acceleration</li></ul></li></ul></li> <li><p>Standardization</p> <ul> <li>Used RobustScaler</li> <li>Each Id was standardized individually</li></ul></li> <li><p>Sequence Creation Method:</p> <ul> <li><p>Training</p> <ul> <li>Sequence length: 1000<ul> <li>Sequences were created by shifting 500 steps from the starting position.</li></ul></li></ul></li> <li><p>Inference</p> <ul> <li>Sequence length: 3000 or 5000 (3000 if data size was less than 5000, 5000 otherwise)</li> <li>Sequences were created by shifting 1500 or 2500 steps from the starting position.<ul> <li>During prediction, the sequence section from 750/1250 to 2250/3750 was utilized.</li> <li>The initial segment spanned from 0 to 2250/3750, while the final segment used from 750/1250 to the end of the sequence. </li></ul></li></ul></li></ul></li> <li><p>Models</p> <ul> <li><p>For each target, we ensembled four models.</p></li> <li><p>The following settings were common to each model</p> <ul> <li>Model : GRU</li> <li>Cross validation method : StratifiedGroupKFold<ul> <li>group : Subject</li></ul></li> <li>Loss : BCEWithLogitsLoss</li> <li>Optimizer : AdamW</li> <li>Scheduler : get_linear_schedule_with_warmup<ul> <li>(Although not verified in detail) get_linear_schedule_with_warmup seemd to work better in CV than get_cosine_schedule_with_warmup.</li></ul></li> <li>Sequence length<ul> <li>Train : 1000</li> <li>Inference : 3000 / 5000</li> <li>Training the model with a longer sequence did not improve CV or public score. However, training with a short sequence and performing inference with a long sequence significantly improved both CV and public score. 
</li></ul></li></ul></li> <li><p>Model1</p> <ul> <li>This model trained with equal loss for each target.</li> <li>CV<ul> <li>Sequence length 3000 / 5000 : 0.493 <br><br> (Sequence length 1000 : 0.438)</li></ul></li> <li>Ensemble weight : 0.2</li></ul></li> <li><p>Model2</p> <ul> <li>The loss weight of one target was set to 0.6, and the remaining targets were set at 0.4.<ul> <li>The following three patterns<ul> <li>StartHesitation : 0.6 , Turn &amp; Walking : 0.4</li> <li>Turn : 0.6 , StartHesitation &amp; Walking : 0.4</li> <li>Walking : 0.6 , StartHesitation &amp; Turn : 0.4</li></ul></li></ul></li> <li>The model was saved at the epoch where the target with the weight set to 0.6 had the best score.</li> <li>Only the predictions with the loss weight set to 0.6 were used in the test predictions.</li> <li>CV<ul> <li>Sequence length 3000 / 5000 : 0.520</li></ul></li> <li>Ensemble weight : 0.4</li></ul></li> <li><p>Model3 &amp; 4</p> <ul> <li>The loss weight for two targets was set to 0.8, and the remaining target was set at 0.2.<ul> <li>The following three patterns<ul> <li>StartHesitation &amp; Turn : 0.8 , Walking : 0.2</li> <li>StartHesitation &amp; Walking : 0.8 , Turn : 0.2</li> <li>Turn &amp; Walking : 0.8 , StartHesitation : 0.2</li></ul></li></ul></li> <li>The model was saved at the epoch where the two targets with the weight set to 0.8 had the best score.</li> <li>Only the predictions with a loss weight set to 0.8 were used in the test predictions.</li> <li>CV <ul> <li>Sequence length 3000 / 5000 : 0.536</li></ul></li> <li>Ensemble weight : 0.4</li></ul></li> <li><p>ensemble</p> <ul> <li>CV<ul> <li>Sequence length 3000 / 5000 : 0.537</li></ul></li></ul></li></ul></li> </ul> <h2>DeFog</h2> <ul> <li><p>Features</p> <ul> <li>Three provided accelerations</li> <li>Feature engineering performed for each acceleration included<ul> <li>Difference between prior and subsequent accelerations</li></ul></li></ul></li> <li><p>Standardization</p> <ul> <li>Used StandardScaler</li> <li>Each Id was standardized individually</li></ul></li> <li><p>Sequence Creation Method:</p> <ul> <li><p>Training</p> <ul> <li>Sequence length: 5000<ul> <li>Sequences were created by shifting 2500 steps from the starting position.</li></ul></li></ul></li> <li><p>Inference</p> <ul> <li>Sequence length: 15000 or 30000 (15000 if data size was less than 200000, 30000 otherwise)</li> <li>Sequences were created by shifting 7500 or 15000 steps from the starting position.<ul> <li>During prediction, the sequence section from 3750/7500 to 11250/22500 was utilized.</li> <li>The initial segment spanned from 0 to 11250/22500, while the final segment used from 3750/7500 to the end of the sequence.</li></ul></li></ul></li></ul></li> <li><p>Models</p> <ul> <li><p>We ensembled five models </p></li> <li><p>The following settings are common to each model</p> <ul> <li>Model : GRU</li> <li>Cross validation method : StratifiedGroupKFold<ul> <li>group : Subject</li></ul></li> <li>Optimizer : AdamW</li> <li>Loss : BCEWithLogitsLoss</li> <li>Scheduler : get_linear_schedule_with_warmup</li> <li>Sequence length<ul> <li>Train : 5000</li> <li>Inference : 15000 / 30000</li></ul></li> <li>The loss weights for each target were uniform</li> <li>Only instances where both 'Valid' and 'Task' were true were considered for loss calculation.</li></ul></li> <li><p>model1</p> <ul> <li>CV<ul> <li>Sequence length 15000 / 30000 : 0.279</li></ul></li> <li>Ensemble weight : 0.35</li></ul></li> <li><p>model2</p> <ul> <li>Utilized the first round of pseudo-labeling<ul> 
<li>Applied hard labels, with the label set to 1 only if the data value of the 'Event' was 1, otherwise it was set to 0</li> <li>The label was determined based on the highest predictive value among the three target predictions</li> <li>Inference results from sequences of length 15000 from model1 were used</li></ul></li> <li>The application of pseudo-labeling significantly improved both public and private scores</li> <li>CV<ul> <li>Sequence length 15000 / 30000 : 0.306</li></ul></li> <li>Ensemble weight : 0.25</li></ul></li> <li><p>model3</p> <ul> <li>Utilized the second round of pseudo-labeling</li> <li>CV<ul> <li>Sequence length 15000 &amp; 30000 : 0.313</li></ul></li> <li>Ensemble weight : 0.25</li></ul></li> <li><p>model4</p> <ul> <li>Increased the hidden size of the GRU</li> <li>Utilized the first round of pseudo-labeling</li> <li>CV<ul> <li>Sequence length 15000 &amp; 30000 : 0.3393</li></ul></li> <li>Ensemble weight : 0.10</li></ul></li> <li><p>model5</p> <ul> <li>Trained with all data</li> <li>Utilized pseudo-labeling</li> <li>Ensemble weight : 0.05</li></ul></li> <li><p>ensemble(excluding model5)</p> <ul> <li>Sequence length 15000 &amp; 30000 : 0.33706</li></ul></li></ul></li> </ul> <h2>tDCS FOG &amp; DeFog</h2> <ul> <li>CV : 0.548</li> <li>Public Score : 0.530</li> <li>Private Score : 0.450</li> </ul> <h2>Inference notebook</h2> <p><a href="https://www.kaggle.com/code/takoihiraokazu/cv-ensemble-sub-0607-1" target="_blank">https://www.kaggle.com/code/takoihiraokazu/cv-ensemble-sub-0607-1</a></p> <h2>Code</h2> <p><a href="https://github.com/TakoiHirokazu/Kaggle-Parkinsons-Freezing-of-Gait-Prediction" target="_blank">https://github.com/TakoiHirokazu/Kaggle-Parkinsons-Freezing-of-Gait-Prediction</a></p>
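<p>As an illustration of the hard pseudo-labelling rule described above for the 'notype' DeFog data, here is a hedged sketch (my own, not the authors' code; array shapes are assumptions): the Event column marks that some FOG event happened, and the model's strongest prediction decides which of the three targets it was.</p>
<pre><code>import numpy as np

def make_hard_pseudo_labels(event, preds):
    """event: (T,) array of 0/1; preds: (T, 3) model predictions for the 3 targets."""
    labels = np.zeros_like(preds, dtype=np.int64)
    best = preds.argmax(axis=1)          # most confident target at each timestep
    rows = np.where(event == 1)[0]       # label only where an event is known to occur
    labels[rows, best[rows]] = 1
    return labels
</code></pre>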
BirdCLEF 2023
37th place solution - TF CNN + BirdNet emb. cls. & XGB
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to the winners and also to the host that create these annual competitions. 👌</p> <p><strong>Summary of the solution</strong></p> <p>Inference with 12 models all together within the 2 hours CPU inference window.</p> <p>5 different Tensorflow CNN, SOTA techniques trained with previous competition data and finetuned with current data, post ONNX converted. Ensembled together with: 1. The latest BirdNet TFlite version with 124 of the 264 birds. 2. custom classification TF head trained with Birdnet embeddings on competition data. 3. XGB 5 fold trained with Birdnet embeddings on competition data. Also used the non-bird classes from Birdnet to reduce the probabilities with 50% over the birds if the non-birds classes had higher probability in 5 seconds window, nothing of use here but interesting to implement.</p> <p><strong>Training and data</strong></p> <p>As starter code I used <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a> <a href="https://www.kaggle.com/code/awsaf49/birdclef23-pretraining-is-all-you-need-train" target="_blank">https://www.kaggle.com/code/awsaf49/birdclef23-pretraining-is-all-you-need-train</a>. Tensorflow based TPU training with W&amp;B logging and latest SOTA audio techniques, SpecAug like Time Freq masking and Audio Augmentation like Gaussian Noise, Random CropPad, CutMix and MixUp.</p> <blockquote> <p>Methodology 🎯<br> In this notebook, we will explore how to identify bird calls using TensorFlow. Specifically, this notebook will cover:<br> How to use tf.data for audio processing tasks and reading .ogg files in TensorFlow<br> How to extract spectrogram features from raw audio on TPU/GPU, which reduces CPU bottleneck significantly, speeding up the process by ~ 4×on P100 GPU compared to the previous notebook.<br> Unlike the previous tutorial, this notebook will perform spectrogram augmentation such as TimeFreqMask and Normalization on GPU/TPU and perform CutMix and MixUp with audio data on CPU.<br> This notebook demonstrates how pre-training on the BirdCLEF - 2020, 2021, 2022 &amp; Xeno-Canto Extend dataset can improve transfer learning performance. CNN backbones, like EfficientNet, struggle with spectrogram data even with ImageNet pre-trained weights as they are not fimilar with audio data. Pre-training on an audio dataset, like BirdCLEF, can mitigate this issue and can yield a ~ 5% improvement in local validation and ~ 2% improvement in leaderboard. This notebook is compatible with both GPU, TPU, and the newly launched TPU-VM device is automatically selected, so you won't have to do anything to allocate the device.</p> </blockquote> <p>Credit to <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a> </p> <p><strong>Inference and submissions</strong></p> <p>TF CNN models trained and used: <br> Efficientnetb1, Eficientnetb0, Efficientnetv1b1ns, Efficientnetv2m, Efficientnetv2s – some trained with all the data some in single fold, many also trained/saved with weighted average over 2-5 checkpoints-weights, during the pretraining phase for better generalization. All models converted to ONNX format in test mode. 
All models were picked based on speed and performance, e.g. whether they could be ONNX-converted and how fast they ran on CPU in ONNX format. <br> BirdNet:<br> Used the code and models from <a href="https://github.com/kahst/BirdNET-Analyzer" target="_blank">https://github.com/kahst/BirdNET-Analyzer</a> and used the latest V.2.3 tflite model for best test-time speed. I could find 124 of the 264 competition birds in the model and also some other interesting classes within the 3K BirdNet classes.<br> I used BirdNet to create and add 4 models to the CNN ensemble.</p> <ol> <li>Classify and extract the 124 birds and also ~100 non-bird classes for further ensembling and testing.</li> <li>Used the same model’s embeddings to classify with a custom cls-head which I had pretrained on all the classes. This is described in the BirdNet code, but I used another, deeper architecture.</li> <li>Used the same model’s embeddings to classify with 5-fold XGB models which I had pretrained on all the classes with multi:softprob and the aucpr metric.</li> <li>I concatenated the ensembled classes together with the non-bird classes, extracted which 5-second windows had the highest probabilities within the non-bird classes, and reduced the probabilities of the 264 bird classes by 50% in those windows, since noise or other sounds would reduce the likelihood of a birdcall. Now, this was of no use with this metric but still interesting to test for other purposes and metrics 😊</li> </ol> <p>Then all CNNs and BirdNet variants were power averaged and weighted ensembled.<br> Below is the progression of the competition metric with the different models/solutions. These are not the picked submissions but another ensemble weighting, including the best-scoring one, tested post-deadline.</p> <p>Private – Public – Models</p> <p>.71621 – .81447 – Only CNNs<br> .72391 – .81885 – With CNNs + BirdNet classifier extracting the 124 birds.<br> .72250 – .81873 – With CNNs + BirdNet embeddings classified with a custom cls-head which I had pretrained on all the classes.<br> .72459 – .81996 – With CNNs + BirdNet embeddings classified with 5-fold XGB which I had pretrained on all the classes.<br> .73132 – .82337 – All of the above weighted ensembled.</p> <p><strong>That’s it!</strong></p> <hr>
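<p>The final "power averaged and weighted ensembled" step can be sketched as follows (my own illustration; the exponent and weights here are placeholders, not the values used in the solution).</p>
<pre><code>import numpy as np

def power_weighted_ensemble(prob_list, weights, power=2.0):
    """prob_list: list of (n_windows, n_classes) probability arrays, one per model."""
    stacked = np.stack([np.asarray(p) ** power for p in prob_list])
    w = np.asarray(weights, dtype=float)[:, None, None]
    return (stacked * w).sum(axis=0) / w.sum()

# e.g. ensemble = power_weighted_ensemble([p_cnns, p_birdnet, p_xgb], [0.6, 0.25, 0.15])
</code></pre>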
Benetech - Making Graphs Accessible
14th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, congratulations to the winning teams! This task was very interesting and challenging. I hope to learn a lot from the top solutions.</p> <p>My solution consists of chart_type classification, template matching for scatter and pix2struct model for the rest chart_types.</p> <h1>Chart_type Classification</h1> <p>I finetuned resnet18d with about 70,000 images in the ICDAR dataset, resulting in the accuracy of 0.995 for extracted data.</p> <h1>Template matching for scatter</h1> <p>Scatter often requires a large number of datapoints, and long token lengths are needed. I addressed this problem by object detection, OCR and template matching. This approach scores 0.51 on extracted data. The algorithm is as follows:</p> <ol> <li>object detection of data points and axis texts by YOLOX (im_size=480)</li> <li>generate a pattern of datapoints using the highest confidence among the detected points</li> <li>identify the shape of the pattern based on the distance between the RGB values of the pattern and the background color</li> <li>scan the entire image and calculate the similarity score (L2 distance) with the pattern</li> <li>select approximately 1000 pixels with the highest similarity score as candidate points</li> <li>fill the rectangular area containing the data points with background color and use it as a base for reconstruction</li> <li>take out the candidate points in order from the one with the highest similarity score, and place a pattern if the following two conditions are satisfied<ol> <li>the L2 error with the original image is smaller when a pattern is placed than when a pattern is not placed</li> <li>the overlap with the existing pattern does not exceed 30%.</li></ol></li> </ol> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3962786%2F7bc7f2767887a28aa23dfff79aa62965%2Fscatter.png?generation=1687226523383966&amp;alt=media"></p> <p>Since simple linear regression is vulnerable to outliers, I tried to use some form of robust regression. Considering that methods such as RANSAC and huber require a parameter on the scale and that the parameters tend to overfit the data set due to small data size, I implemented a simple robust regression without any hyper parameters. All two point pairs are connected by a straight line and the slope and intercept are calculated. The desired regression line is obtained using the median value for all computed slopes and intercepts. 
</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3962786%2Fa42b3944e43c17ada9c800e7242411c4%2Freg.png?generation=1687226587258677&amp;alt=media"></p> <h2>Pix2Struct model for other types</h2> <ul> <li>labels are in the form xyxy</li> <li>decimal part of numerical is rounded to 4 significant digits</li> <li>augmentation for translation, rotation, noise, hue, etc.</li> <li>pretrained model: MatCha-base</li> <li>hyperparameters<ul> <li>max_length: 512</li> <li>max_patches: 2048</li> <li>lr: 1e-5</li> <li>scheduler: get_cosine_schedule_with_warmup</li> <li>num_warmup_steps: 1000</li> <li>weight_decay: 1e-5</li></ul></li> <li>Training data<ul> <li>extracted (1118)</li> <li>generated (999)</li> <li>ICDAR (labeled 1286+ pseudo&amp;hand labeled 2988)</li></ul></li> </ul>
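<p>The hyperparameter-free robust regression described in the scatter section above (fit a line through every pair of points, then take the median slope and intercept, closely related to the Theil-Sen estimator) can be sketched like this; the code is my own illustration, not the author's.</p>
<pre><code>import numpy as np
from itertools import combinations

def robust_line_fit(x, y):
    slopes, intercepts = [], []
    for i, j in combinations(range(len(x)), 2):
        if x[j] == x[i]:
            continue                               # skip pairs with no horizontal spread
        s = (y[j] - y[i]) / (x[j] - x[i])
        slopes.append(s)
        intercepts.append(y[i] - s * x[i])         # intercept of the line through this pair
    return np.median(slopes), np.median(intercepts)
</code></pre>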
AMP®-Parkinson's Disease Progression Prediction
Silver Medal Approach - AIRA MATRIX
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: AMP®-Parkinson's Disease Progression Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>AMP® Parkinson's Disease Progression Prediction</h1> <h2>Silver Medal Approach (Rank 57 Top 3%)</h2> <p>Authors: <a href="https://www.kaggle.com/omkarnanekarairam" target="_blank">Omkar Nanekar</a> and <a href="https://www.kaggle.com/adityavartak" target="_blank">Aditya Vartak</a></p> <h1>Deconstructing the problem statement</h1> <h2>Motivation</h2> <p>Parkinson's disease (PD) is a degenerative brain disorder that significantly impairs movement, cognition, sleep, and various other normal functions. Unfortunately, there is currently no known cure for PD, and the condition progressively worsens over time. It is projected that by 2037, approximately 1.6 million people in the United States will be affected by Parkinson's disease, resulting in an economic burden nearing $80 billion. <br> Extensive research suggests that abnormalities in proteins or peptides play a crucial role in the onset and progression of this disease. By leveraging the power of data science, gaining a deeper understanding of these abnormalities could provide vital insights for the development of new pharmacotherapies aimed at slowing down disease progression or finding a cure for Parkinson's. Significant efforts have been made to collect complex clinical and neurobiological data from over 10,000 subjects, with the intention of sharing this valuable information with the research community. While important discoveries have emerged from this data, the identification of definitive biomarkers or curative approaches remains elusive. Therefore, investigating potential causes of Parkinson's disease is essential in the ongoing quest for a cure, as it would greatly alleviate the immense suffering endured by patients and help reduce the substantial medical care costs associated with this condition.</p> <h2>Problem Statement</h2> <p>The objective of this competition is to create a predictive model for MDS-UPDRS scores, which are used to measure the progression of Parkinson's disease in patients. The MDS-UPDRS is a comprehensive evaluation tool that assesses both motor and non-motor symptoms associated with Parkinson's. In this competition, you will develop a model that utilizes data on protein and peptide levels collected over time from subjects with Parkinson's disease and age-matched control subjects without the disease. 
The outcome of your work has the potential to offer valuable insights into the specific molecules that undergo changes as Parkinson's disease advances, thereby contributing to important breakthroughs in our understanding of the disease.</p> <table> <thead> <tr> <th>MDS-UPDRS-Level</th> <th>Cognitive impairment</th> <th>Hallucinations</th> <th>Depressed mood</th> <th>Anxious</th> <th>Apathy</th> <th>Dopamine Dysregulation</th> </tr> </thead> <tbody> <tr> <td>Level 1</td> <td>Slight</td> <td>Rare</td> <td>Short spurts</td> <td>Short spurts</td> <td>Slight</td> <td>Rare</td> </tr> <tr> <td>Level 2</td> <td>Minimal</td> <td>Frequent</td> <td>Multiple Days</td> <td>Multiple Days</td> <td>Some activities</td> <td>Minor</td> </tr> <tr> <td>Level 3</td> <td>Interfering with daily activities</td> <td>loss of Insight</td> <td>Interfering with daily activities</td> <td>Interfering with daily activities</td> <td>Most activities</td> <td>Major</td> </tr> <tr> <td>Level 4</td> <td>Severe</td> <td>Delusion</td> <td>Severe</td> <td>Severe</td> <td>No initiative</td> <td>Severe</td> </tr> </tbody> </table> <pre><code> 1: -UPDRS and its characteristics </code></pre> <h2>Competition Evaluation</h2> <p>It is a Notebooks-only submission competition. i.e submissions have to be in the form of a runnable kaggle notebook.Submissions to this competition required using the provided python time-series API, which ensures that models do not peek forward in time.</p> <p>The metric used is SMAPE (Symmetric mean absolute percentage error). The formula for the same is as follows:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14390726%2Fe96fb55fc713956263611ce3bdef1a6d%2FSMAPE.png?generation=1686726363879996&amp;alt=media" alt=""><br> During the competition, the metric was changed to SMAPE+1 because if actual value is zero and predicted value is &gt;1 , then SMAPE goes to 200% , which leads to unstable evaluations. Hence the competition hosts changed the metric to SMAPE + 1 to mitigate the impact of cases where the true UPDRS score is zero. The metric is the same as SMAPE except that we add 1 to both actual value and predicted value before calculating SMAPE.</p> <h2>Data</h2> <p>The dataset comprises protein abundance data obtained from mass spectrometry readings of cerebrospinal fluid (CSF) samples collected from Parkinson's disease (PD) patients. It includes information at both the peptide and protein level, with associated visit and patient identifiers. Clinical data such as UPDRS scores, representing PD severity, are also provided. The dataset allows for time-series analysis as patients contributed samples over multiple visits. The goal of the competition is to leverage this data to predict the progression of PD using protein abundance information. Whereas the test data set was kept hidden, so it wasn't possible to determine the counts of patients or the samples in test data. 
However the public leaderboard scores were based on 13% of data and remaining 87% data were used for the private and final leaderboard.</p> <table> <thead> <tr> <th>visit_id</th> <th>patient_id</th> <th>visit_month</th> <th>updrs_1</th> <th>updrs_2</th> <th>updrs_3</th> <th>updrs_4</th> <th>upd23b_clinical_state_on_medication</th> </tr> </thead> <tbody> <tr> <td>55_0</td> <td>55</td> <td>0</td> <td>10</td> <td>6</td> <td>15</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>55_3</td> <td>55</td> <td>3</td> <td>10</td> <td>7</td> <td>25</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>55_6</td> <td>55</td> <td>6</td> <td>8</td> <td>10</td> <td>34</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>55_9</td> <td>55</td> <td>9</td> <td>8</td> <td>9</td> <td>30</td> <td>0</td> <td>On</td> </tr> <tr> <td>55_12</td> <td>55</td> <td>12</td> <td>10</td> <td>10</td> <td>41</td> <td>0</td> <td>On</td> </tr> </tbody> </table> <pre><code> . </code></pre> <table> <thead> <tr> <th>visit_id</th> <th>visit_month</th> <th>patient_id</th> <th>UniProt</th> <th>NPX</th> </tr> </thead> <tbody> <tr> <td>55_0</td> <td>0</td> <td>55</td> <td>O00391</td> <td>11254.3</td> </tr> <tr> <td>55_0</td> <td>0</td> <td>55</td> <td>O00533</td> <td>732430</td> </tr> <tr> <td>55_0</td> <td>0</td> <td>55</td> <td>O00584</td> <td>39585.8</td> </tr> <tr> <td>55_0</td> <td>0</td> <td>55</td> <td>O14498</td> <td>41526.9</td> </tr> <tr> <td>55_0</td> <td>0</td> <td>55</td> <td>O14773</td> <td>31238</td> </tr> </tbody> </table> <pre><code> . </code></pre> <table> <thead> <tr> <th>visit_id</th> <th>visit_month</th> <th>patient_id</th> <th>UniProt</th> <th>Peptide</th> <th>PeptideAbundance</th> </tr> </thead> <tbody> <tr> <td>55_0</td> <td>0</td> <td>55</td> <td>O00391</td> <td>NEQEQPLGQWHLS</td> <td>11254.3</td> </tr> <tr> <td>55_0</td> <td>0</td> <td>55</td> <td>O00533</td> <td>GNPEPTFSWTK</td> <td>102060</td> </tr> <tr> <td>55_0</td> <td>0</td> <td>55</td> <td>O00533</td> <td>IEIPSSVQQVPTIIK</td> <td>174185</td> </tr> <tr> <td>55_0</td> <td>0</td> <td>55</td> <td>O00533</td> <td>KPQSAVYSTGSNGILLC(UniMod_4)EAEGEPQPTIK</td> <td>27278.9</td> </tr> <tr> <td>55_0</td> <td>0</td> <td>55</td> <td>O00533</td> <td>SMEQNGPGLEYR</td> <td>30838.7</td> </tr> </tbody> </table> <pre><code> . </code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14390726%2F87283515bf56fa4b8903d9b42a84e1a2%2FAMP%20PPDP%20Appraoch(1).drawio.png?generation=1686735659764230&amp;alt=media"></p> <pre><code> . Our Approach </code></pre> <h1>Overview of Final Approach</h1> <h2>Training</h2> <p>Fig. 1 Depicts the flow we followed in our best approach for the competition. To improve our performance in the competition, we adopted the following strategy:</p> <ol> <li>Exploratory Experiments: We conducted a series of initial experiments using various pre-trained approaches. This allowed us to quickly identify the most effective architectures for our task.</li> <li>Advanced Techniques: Once we identified the promising architectures, we applied advanced techniques such as fine-tuning, changing the cross validation steps, proteins and peptide information. This iterative process helped us optimize the models further.</li> <li>Change in target: As the problem was to predict the 4 UPDRS scores(i.e updrs_1,updrs_2,updrs_3 and updrs_4) based on the time series of patients. To do this we adapted the architecture where we focus on finding the trend of the target among the time period. 
In summary, we calculated trends for different targets based on a dataset by finding the optimal trend values that minimize the difference between the true values and the predictions. The results are stored in a dictionary for each target to use later for predictions on test data.</li> <li>Protein Information Extraction: From the Pearson correlation among all 4 updrs scores and provided 229 proteins. Then took the average among all 4 updrs scores and picked up the top one to proceed further. We then used this protein with the trend found for the updrs score to find out the shift in protein with time. To do this we calculated the optimal protein shift for different quantile ranges and targets. We did this by filtering the dataset based on the quantile ranges like {'quantile_low': 0.0, 'quantile_high': 0.03}, {'quantile_low': 0.06, 'quantile_high': 0.98}, {'quantile_low': 0.94, 'quantile_high': 1.0}, for finding the best protein shift using an optimization algorithm, and storing the results in a dictionary.</li> <li>Model Ensemble: After finding the best shifts for protein and trend value for updrs scores we used this information for prediction for test score. For prediction of the score, we rounded the scores to the nearest whole number. </li> </ol> <p>In summary, we used the provided API to iterate over the test data and combine protein features with other relevant information. Then fill the NA values by the forward fill method. Then, calculated ratings for different targets based on prediction months and trend values. The ratings then are adjusted based on protein values within specific quantile ranges. By employing this approach, we achieved a significant boost in our predictions and obtained the final prediction for the competition.</p> <h2>Cross-Validation</h2> <p>As test data doesn’t contain any patient data from the training set, to perform the cross-validation we cannot use time-series based approaches, instead we used the cross validation with GroupKFold level on patient_id. We did the above in order to avoid the patients in our train data to be a part of our test data, thus avoiding data leakage</p> <h1>Expertise and skills</h1> <h2>Applied Techniques</h2> <ol> <li>Clip Function: The calculate_month_trend_predicitons function includes a clip operation (pred_month.clip(54, None)) for the target variable 'updrs_4'. This ensures that the predicted month value does not go below 54.</li> <li>Trend Calculation: The calculate_month_trend_predicitons function calculates the trend prediction for a given month by multiplying the predicted month value with the corresponding trend factors obtained from the target_to_trend dictionary.</li> <li>Rounding: The calculate_predicitons_protein function rounds the trend prediction plus the protein shift value to obtain the final prediction.</li> <li>Optimization: The find_best_const function performs optimization using the minimize function from the SciPy library. It aims to find the best constant shift value that minimizes the metric (smape_plus_1) between the true values (y_true_array) and the predictions (y_pred) obtained by calling the calculate_predicitons_protein function with varying protein shift values.</li> <li>Quantile Filtering: The code applies quantile filtering to the train_clinical_all dataset for each target variable and quantile group defined in the npx_groups list. It selects a subset of data based on the quantile range specified for a particular feature. 
The filtered dataset is stored in train_clinical_all_filtered.</li> <li>Constant Shift Optimization: The find_best_const function is called within a loop for each target variable and quantile group. It uses the filtered dataset (train_clinical_all_filtered) to find the best constant shift value for that specific target and quantile group. The result is stored in the shift field of each npx_group item, and the items are appended to the target_to_npx_groups_shift dictionary.</li> </ol> <p>Overall, the code combines trend prediction, rounding, optimization, and quantile filtering techniques to calculate predictions and determine the best constant shift values for different target variables and quantile groups.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14390726%2F11dcc14b4903c859aed9bc5322794ff0%2FmodelTrend%20(1).png?generation=1686732948157020&amp;alt=media"></p> <pre><code> Fig . Predictions Trend-optimization </code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14390726%2F79b454794bfe9117981eca3f2d605c97%2Fnewplot.png?generation=1686732482532612&amp;alt=media"></p> <pre><code> .: Protein-P5060 NPX shift for UPDRS- </code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14390726%2F2fd66b7730b13e44790bf446e771593d%2Fnewplot%20(1).png?generation=1686732570962317&amp;alt=media"></p> <pre><code> .: Protein-P5060 NPX shift for UPDRS- </code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14390726%2F9c725618a1e5e5a21380d48557005a37%2Fnewplot%20(2).png?generation=1686732574283117&amp;alt=media"></p> <pre><code> .: Protein-P5060 NPX shift for UPDRS- </code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14390726%2F665f4756319abbfd04fdc076f76c5809%2Fnewplot%20(3).png?generation=1686732577285796&amp;alt=media"></p> <pre><code> .: Protein-P5060 NPX shift for UPDRS- </code></pre> <h2>False paths, what didn’t work</h2> <ol> <li>Merging the clinical with proteins and peptide information and then adding supplementary data to it. As for supplementary data the proteins and peptide information wasn’t available so we imputed 0 for the same.</li> <li>We also tried finding the most expressed protein for each of the protein supergroups of O, P and Q , then used this protein in conjunction with visit features. However, this method didn't turn out to be very effective.</li> <li>Doing the PCA and taking only the first component with visit_id,patient_id and visit_month for training.</li> <li>Did cross validation with GroupKFold on patient level for splitting the data in train test and fine tuning the Random Forest model with the help of Optuna to find the best parameters and storing the fitted model for testing.</li> <li>The above approach used only for updrs_1, updrs_2 and updrs_3 score prediction for updrs_4 we used trend analysis to find the best trend pattern for updrs_4. Because updrs_4 was having more than 50% of null values which makes it difficult to train models for the same. </li> <li>With this combination of trend and model approach we got an SMAPE score of 69.8. <a href="url" target="_blank">https://www.kaggle.com/code/omkarnanekarairam/pca-with-optuna-model-public-lb-score-56-7#6.-Inference</a></li> </ol> <p>However the final approach score was 69.7 which led us to get a silver medal. 
<a href="https://www.kaggle.com/code/omkarnanekarairam/protein-for-trend-silver-medal-approach?scriptVersionId=130051459" target="_blank">https://www.kaggle.com/code/omkarnanekarairam/protein-for-trend-silver-medal-approach?scriptVersionId=130051459</a></p> <h1>Areas to build skills in</h1> <p>After analyzing the competition forums and studying the approaches adopted by previous winners, several potential areas have emerged as promising targets for skill development:</p> <ul> <li>We should have done a more comprehensive EDA of the data in order to reach the conclusion that the top rankers did: the predictive power of the proteins and peptides was about as useful as random numbers.</li> <li>Relying on just pure ML and deep learning approaches didn’t work for this competition. Hence, flexibility to switch to simpler and unorthodox methods can be needed at times.</li> <li>We could have utilized ANNs for modeling to capture relationships that were not being captured.</li> </ul> <h1>Conclusion:</h1> <p>Participating in this competition was a huge learning experience. The eventual result, amid the multiple challenges we faced, validates our team’s ability to work successfully with small, industry-specific structured datasets in healthcare. There is still huge scope for learning and improvement in working with real-world datasets like these, hopefully helping us solve the difficult problems faced in healthcare.</p> <h1>Citation:</h1> <p>"In this assignment, we utilized a code snippet obtained from the same Kaggle competition, which we found to be the best architecture for the competition. The code, which is responsible for finding the trend for the UPDRS scores and finding the best shift for proteins, was written by Vitaly Kudelya and posted on Kaggle under the username vitalykudelya. The code was accessed on 5th May and can be found at <a href="https://www.kaggle.com/code/vitalykudelya/protein-npx-groups-trend" target="_blank">https://www.kaggle.com/code/vitalykudelya/protein-npx-groups-trend</a>."</p>
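<p>For readers who want to see the core ideas from the Applied Techniques section in code, here is a hedged sketch (my own approximation of the described logic, with illustrative names, not the cited notebook's actual code): the SMAPE+1 metric that the constant shifts are optimized against, and the month-times-trend prediction with the clip for updrs_4.</p>
<pre><code>import numpy as np

def smape_plus_1(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float) + 1
    y_pred = np.asarray(y_pred, dtype=float) + 1
    return 100 * np.mean(2 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred)))

def month_trend_prediction(pred_month, target, target_to_trend, protein_shift=0.0):
    m = np.asarray(pred_month, dtype=float)
    if target == "updrs_4":
        m = np.clip(m, 54, None)          # the clip described for updrs_4
    # a quantile-dependent protein shift is added, then the result is rounded
    return np.round(m * target_to_trend[target] + protein_shift)
</code></pre>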
Benetech - Making Graphs Accessible
28th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we would like to thank Kaggle &amp; Benetech for hosting this very interesting competition. It was great fun to participate, and we learned a tremendous amount from it. 😀</p> <p>We used a combination of image classification, object detection, Donut, PaddleOCR, TrOCR, and lots of postprocessing! 😅</p> <hr> <h1>Models</h1> <p>We trained models for the following tasks:</p> <ol> <li>Plot type classification - ResNet34</li> <li>Rotated vs non-rotated X tick label classification - ResNet34</li> <li>Horizontal bar detection - FasterRCNN with ResNet50-FPN backbone</li> <li>Vertical bar detection (used in the histogram pipeline as well) - FasterRCNN with ResNet50-FPN backbone</li> <li>Line point detection - FasterRCNN with ResNet50-FPN backbone</li> <li>Scatter point &amp; dot detection (a single model for both) - FasterRCNN with ResNet50-FPN backbone</li> <li>Tick mark detection - FasterRCNN with ResNet50-FPN backbone</li> <li>X tick label detection (used only for detecting non-rotated X tick labels) - FasterRCNN with ResNet50-FPN backbone</li> <li>Y tick label detection - FasterRCNN with ResNet50-FPN backbone</li> <li>Donut (used only for recognizing rotated X tick labels) - <a href="https://huggingface.co/naver-clova-ix/donut-base" target="_blank"><code>'naver-clova-ix/donut-base'</code></a></li> </ol> <p>In addition, we used the following models as-is (i.e., without any fine-tuning):</p> <ol> <li>PaddleOCR for multi-line text detection &amp; conversion to single-line texts - <a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_en/PP-OCRv3_introduction_en.md" target="_blank">PP-OCRv3</a></li> <li>TrOCR for OCR of single-line texts - <a href="https://huggingface.co/microsoft/trocr-base-printed" target="_blank"><code>'microsoft/trocr-base-printed'</code></a></li> </ol> <p><strong>Note:</strong> We used PaddleOCR because TrOCR can only read single-line texts. 
For single-line texts, we found TrOCR to be very accurate.</p> <hr> <h1>Inference Examples</h1> <p>The following images are examples of inference using each of the above models:</p> <p><strong>Plot type classification:</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Faf30206384b2f090c6fcbd0cb59bf959%2Fplot_type_inference.png?generation=1687424929176720&amp;alt=media" alt=""></p> <p><strong>Rotated vs non-rotated X tick label classification:</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F037c35db3590c5c1b67f85ab57fb36e2%2Frotation_inference.png?generation=1687422645141712&amp;alt=media" alt=""></p> <p><strong>Horizontal bar detection:</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Fd4d7e6bee836a68203a8b0a5a08f17d3%2Fhorizontal_bar_inference.png?generation=1687422676773859&amp;alt=media"></p> <p><strong>Vertical bar detection (used in the histogram pipeline as well):</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F7746b8f7e88ee68b0d45729673d7a1ff%2Fvertical_bar_inference.png?generation=1687441005236360&amp;alt=media"></p> <p><strong>Line point detection:</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F0ad448751c154024bf8f9bef757e8cc0%2Fline_inference.png?generation=1687422824609612&amp;alt=media"></p> <p><strong>Scatter point &amp; dot detection (a single model for both):</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F652aa1e31c4486cc05a0de0fc2a4b275%2Fscatter_inference.png?generation=1687422924497010&amp;alt=media"></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Ff8a45dce67f94bde15a6288b28ebef85%2Fdot_inference.png?generation=1687422948970505&amp;alt=media"></p> <p><strong>Tick mark detection:</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F33f16e5a073df164463c026a48d40360%2Ftick_mark_inference.png?generation=1687422992117363&amp;alt=media"></p> <p><strong>X tick label detection (used only for detecting non-rotated X tick labels):</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Fb793316f01d17ea577caa501d142591d%2Fx_tick_label_inference.png?generation=1687423056622492&amp;alt=media"></p> <p><strong>Y tick label detection:</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F2b458db805a977b2c36447bdb6c53a31%2Fy_tick_label_inference.png?generation=1687423081364918&amp;alt=media"></p> <p><strong>Donut (used only for recognizing rotated X tick labels):</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Ffbb5753a331e6cbb22fa19cc56a55a04%2Fdonut_inference.png?generation=1687429751410113&amp;alt=media" alt=""></p> <p><strong>Note:</strong> We trained our Donut model only on image patches below the X axis which contain rotated text.</p> <p><strong>PaddleOCR (for multi-line text detection &amp; conversion to single-line texts):</strong></p> <p><img 
src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Ff3647d71073644c6033c7116e1e33da2%2Fpaddle.png?generation=1687440095811819&amp;alt=media" alt=""></p> <p><strong>TrOCR (for OCR of single-line texts):</strong></p> <p>Image snippet:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F49734d8ecacdb174d1bfaf431d15be99%2Ftrocr_snippet.jpg?generation=1687429867440216&amp;alt=media" alt=""></p> <p>Output: <code>"A. Total Genotoxic Potency"</code></p> <hr> <h1>Deep Learning Frameworks</h1> <p>We used:</p> <ul> <li>For image classification - <a href="https://docs.fast.ai/" target="_blank">fastai</a></li> <li>For object detection - <a href="https://airctic.github.io/icevision/0.12.0/" target="_blank">IceVision</a> + fastai</li> <li>For Donut - Hugging Face Transformers <a href="https://huggingface.co/docs/transformers/main_classes/trainer" target="_blank">Trainer</a> API</li> </ul> <hr> <h1>Data</h1> <p>In addition to the competition dataset, we used some additional images from the <em>ICPR 2022 CHART</em> dataset available at: <a href="https://chartinfo.github.io/toolsanddata.html" target="_blank">https://chartinfo.github.io/toolsanddata.html</a></p> <p>Bounding box annotations for the chart elements were added using either (i) custom Python scripts or (ii) manual annotation tools, viz. Make Sense (<a href="https://www.makesense.ai/" target="_blank">https://www.makesense.ai/</a>) and Roboflow. The annotations were created in the PASCAL VOC format.</p> <p>For line point detection, scatter point detection and tick mark detection, we had to convert the original point annotations to bounding box annotations. We noticed that the choice of bounding box size has a big impact on performance. Hence, we experimented with a few bounding box sizes, and found that the following sizes work well:</p> <table> <thead> <tr> <th>Line Point</th> <th>Scatter Point</th> <th>Tick Mark</th> </tr> </thead> <tbody> <tr> <td>18px</td> <td>16px</td> <td>16px</td> </tr> </tbody> </table> <p>For X tick label detection and Y tick label detection, we noticed that adding a 2px margin to the provided bounding boxes improves performance.</p> <p>Finally, we found that in a tiny fraction of images, one or more bounding boxes overstep the bounds of the image. 
We deleted these images from our dataset.</p> <hr> <h1>Model Validation Schemes</h1> <p><strong>Plot type classification:</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Fb0e8a053b3a9509a36b0a03b0935f248%2Fplot_type_val_scheme.png?generation=1689168901672895&amp;alt=media" alt=""></p> <p><strong>Rotated vs non-rotated X tick label classification:</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F7a1cdc221a5e10f875a76b9163888e21%2Frotation_val_scheme.png?generation=1689169392117974&amp;alt=media" alt=""></p> <p><strong>Horizontal bar detection:</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Fa866016d76730c38864c90c317fb89b7%2Fhorizontal_bar_val_scheme.png?generation=1689168936143640&amp;alt=media" alt=""></p> <p><strong>Vertical bar detection (used in the histogram pipeline as well):</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Fd27e058d0bb05846f5ea42eca69e101f%2Fvertical_bar_val_scheme.png?generation=1689168951683704&amp;alt=media" alt=""></p> <p><strong>Line point detection:</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Ffc25de18f4386ad8711b7953ddffc904%2Fline_val_scheme.png?generation=1689168967592482&amp;alt=media" alt=""></p> <p><strong>Scatter point &amp; dot detection (a single model for both):</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F2411ee14867dda5e3b7aba79d7d234cf%2Fscatter_dot_val_scheme.png?generation=1689170209501077&amp;alt=media" alt=""></p> <p><strong>Tick mark detection:</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Fc3cc591f20cadb2cb59c53fa40ba9b0b%2Ftick_mark_val_scheme.png?generation=1689170225648108&amp;alt=media" alt=""></p> <p><strong>X tick label detection (used only for detecting non-rotated X tick labels):</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F2c1ee44e72bbea46fb8e3e514ad042d2%2Fx_tick_label_val_scheme.png?generation=1689170239312207&amp;alt=media" alt=""></p> <p><strong>Y tick label detection:</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F04a9bed0423c7c186b6b4a3a24b1fc8a%2Fy_tick_label_val_scheme.png?generation=1689170253004364&amp;alt=media" alt=""></p> <p><strong>Donut (used only for recognizing rotated X tick labels):</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Fce3009c4f44c3d1126e268840097779b%2Fdonut_val_scheme.png?generation=1689170268258727&amp;alt=media" alt=""></p> <hr> <h1>Pipelines</h1> <h2>Image Classification</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F65ff8a8d4344bd4c111c8a1d595b740a%2Fdiagram_image_classifier.png?generation=1687437455837261&amp;alt=media" alt=""></p> <h2>Horizontal Bar Pipeline</h2> <p><img 
src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Ff1c8ac7ea8ce27d36a9c8f21ad5a3112%2Fdiagram_horizontal_bar.png?generation=1687695397682069&amp;alt=media" alt=""></p> <h2>Vertical Bar Pipeline</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F4b4ea6617dd3907c0a044474cc475fef%2Fdiagram_vertical_bar.png?generation=1687695428772800&amp;alt=media" alt=""></p> <h2>Histogram Pipeline</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F717f752077d0b61d3f277aba0ab66877%2Fdiagram_histogram.png?generation=1687696678498713&amp;alt=media" alt=""></p> <h2>Line Pipeline</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F2e1debd34468def631db044e287cb0f5%2Fdiagram_line.png?generation=1687696700722044&amp;alt=media" alt=""></p> <h2>Scatter Pipeline</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Fc9b7be151091d9a3cc030f8341d988bf%2Fdiagram_scatter.png?generation=1687518134085611&amp;alt=media" alt=""></p> <h2>Dot Pipeline</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F8b79f1c390dff4b5c8a703767ee3a372%2Fdiagram_dot.png?generation=1687764390803909&amp;alt=media" alt=""></p> <hr> <h1>Results</h1> <table> <thead> <tr> <th></th> <th>Overall</th> <th>Scatter</th> <th>Dot</th> <th>Line</th> <th>Vertical Bar</th> <th>Horizontal Bar</th> </tr> </thead> <tbody> <tr> <td>public</td> <td>0.66</td> <td>0.06</td> <td>0.00</td> <td>0.26</td> <td>0.33</td> <td>0.01</td> </tr> <tr> <td>private</td> <td>0.49</td> <td>0.19</td> <td>0.01</td> <td>0.08</td> <td>0.20</td> <td>0.01</td> </tr> </tbody> </table> <hr>
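<p>A small sketch of how point annotations can be converted into fixed-size PASCAL VOC boxes (as in the Data section above, e.g. 18px for line points, 16px for scatter points and tick marks); this is my own illustration rather than the team's code.</p>
<pre><code>def point_to_voc_box(x, y, box_size, img_w, img_h):
    half = box_size / 2
    xmin = max(0.0, x - half)
    ymin = max(0.0, y - half)
    xmax = min(float(img_w), x + half)
    ymax = min(float(img_h), y + half)
    return xmin, ymin, xmax, ymax

# e.g. an 18px box around a line point detected at (120, 245) in a 500x400 image
print(point_to_voc_box(120, 245, 18, 500, 400))   # (111.0, 236.0, 129.0, 254.0)
</code></pre>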
IceCube - Neutrinos in Deep Ice
17th place solution (silver)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks to everyone, it was an exciting competition!</p> <p>I also want to give thanks to <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a> and <a href="https://www.kaggle.com/seungmoklee" target="_blank">@seungmoklee</a> for your notebooks with LSTM solutions to this problem. They really helped me to understand the possible approaches for this task. </p> <h3>Our solution is an ensemble of 2 types of models:</h3> <h4>1. Ensemble of 5 LSTM models trained on a classification problem to predict bins of angles</h4> <p>We tried different architectures. It turned out that LSTM actually worked better here than GRU. However, in one model we used 3 LSTM + 2 GRU layers and it also worked very well. We also used an exponential LR schedule, which gave a slight boost; cosine annealing LR didn't work well in our case. Models were trained on 200-300 files. I once tried 400-600 files, but training was much slower and the score also increased slowly. With the deadline approaching, it was more "profitable" to use 200-300 files. We also tried to predict just an XYZ vector and then convert it to angles, with MSE and VmF loss; it was OK, but not better than predicting bins of angles.</p> <h4>2. DynEdge graphnets (5-7 in best submissions), just trained with VmF3d loss to predict XYZ vectors (as in the baseline)</h4> <p>We didn't really change anything, just trained this model on 200 files. We tried different pulse lengths, from 96 to 200; more pulses gave better results. We managed to train about 20 epochs before the deadline, but it can definitely give better results with more training.</p> <p>While ensembling, we used different ensemble weights to predict azimuth and zenith. They were chosen on a validation set.</p> <p>Each graphnet takes around 30 minutes on inference. The LSTM ensemble takes around 2 hours and scores 0.999 on the private LB. We didn't check the score of the pure graphnet ensemble on the LB, but the best single model is around 1.006. Mixing these 2 types of models gave us a nice boost (0.987 on the private LB).</p> <h3>Additional thoughts:</h3> <ul> <li>GraphNet is a cool thing; however, it felt like it could be pushed further in our case. It trains well, but after some epochs the loss starts to increase slowly. We probably should have tried different learning rate schedulers, but didn't have time at the end. </li> <li>We also tried a transformer model, trained on the XYZ vector with MSE/VmF loss. I believed that it would win top places in this competition; however, for some reason it didn't outperform the LSTMs. Maybe I chose bad params in its architecture (it was my first time, yeah). Maybe we needed some additional techniques and other losses to make it work.</li> <li>Ensembling gives nice boosts. I think the models are still not robust and in many cases they are not confident in their predictions.</li> </ul>
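<p>The "predict bins of angles" idea can be illustrated with a short sketch (my own; the number of bins and the probability-to-angle conversion are assumptions, not the authors' choices): discretize azimuth into classes for training, then map predicted class probabilities back to an angle via the circular mean of the bin centres.</p>
<pre><code>import numpy as np

N_BINS = 24
edges = np.linspace(0.0, 2 * np.pi, N_BINS + 1)
centres = (edges[:-1] + edges[1:]) / 2

def azimuth_to_bin(azimuth):
    # class label used as the classification target during training
    return np.clip(np.digitize(azimuth, edges) - 1, 0, N_BINS - 1)

def probs_to_azimuth(probs):
    # circular mean of bin centres weighted by the predicted probabilities
    s = (probs * np.sin(centres)).sum()
    c = (probs * np.cos(centres)).sum()
    return np.arctan2(s, c) % (2 * np.pi)
</code></pre>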
AMP®-Parkinson's Disease Progression Prediction
4th Place Gold - Single Model RAPIDS cuML SVR!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: AMP®-Parkinson's Disease Progression Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>AMP®-Parkinson's Disease Progression Prediction</h1> <p>Wow, this competition had a close finish at the top. I suspected this would be the case because I believe there was only 1 source of reliable signal (i.e. <code>patient visit dates</code>) and all the tops teams were working hard to extract an extra SMAPE 0.1 from it. The metric SMAPE is a percentage so <code>0.1</code> is actually <code>0.001</code>. That's how close the finish was!</p> <p>The final leaderboard is separated into two groups. The top 18 teams have <code>SMAPE &lt;= 62.5</code> and the next teams have <code>SMAPE &gt;= 68.4</code>. I believe the top 18 teams used signal from <code>patient visit dates</code> and the next teams <code>did not</code>.</p> <h1>Signal from Protein/Peptide data</h1> <p>In this competition, Kaggle provided 227 Protein NXP features and 968 Peptide PeptideAbundance features. That is 1195 features for each of 17 possible visit dates. We only have 248 train patients. And the <code>curse of dimensionality</code> begins when <code>number of features &gt; train samples / 10</code>. In other words we only have enough train data to reasonably train 25 features not 1195 features!</p> <p>I did an experiment where i made 1000 columns of random numbers. Using forward feature selection, I found that columns of <code>random numbers</code> would boost GroupKFold CV score <strong>the same amount that protein/peptide features did</strong>. This means that there may be signal hiding in the protein peptide data but it is too weak to detect patterns with only 248 patients (because no protein nor peptide boost CV more than random numbers can).</p> <h1>Signal from Patient Visit Dates</h1> <p>Next I searched patient visit dates for signal. Many Kagglers overlooked that we can engineer features from patient visit dates. Here are some example features</p> <ul> <li>when was patient's first blood work measured?</li> <li>did patient get blood work at their first doctor visit, yes or no?</li> <li>how many times did a patient visit the doctor?</li> <li>how long ago was the patient's last visit?</li> </ul> <p>Etc etc. We can create 100s of features about when a patient visited the doctor and when a patient had blood work done. I quickly noticed the following trend. <strong>Patients who visit the doctor more often have larger UPDR scores</strong>. This is shown in the following 3 plots. The first are patients who visited the doctor a normal number of times. The second are patients who visited 1 standard deviation less than normal. And the last are patients who visited 1 standard deviation more. In each plot, we display the average target value per visit month for these 3 groups of patients:<br> <img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/May-2023/normal2.png" alt=""><br> <img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/May-2023/less2.png" alt=""><br> <img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/May-2023/more2.png" alt=""></p> <h1>Feature Engineering</h1> <p>The above plots show that there is signal in the <code>patient visit dates</code>. 
What is the best way to extract this signal? I generated 100s of features and used <code>for-loops</code> with <strong>RAPIDS cuML SVR</strong> to find which features extract the most signal.</p> <p>In the end, simple "booleans" worked best (and the model "created its own features internally"). For each visit month, i created a boolean variable. For example for visit month = 24, i created the following "boolean":</p> <ul> <li><code>v24 = 0</code> if we know that patient did <strong>not</strong> visit on visit month = 24</li> <li><code>v24 = 1</code> if we know that patient did visit on visit month = 24</li> <li><code>v24 = -1</code> if we <strong>do not</strong> know if patient visited on month = 24</li> </ul> <p>The reason for the third category is because at each time step of Kaggle's API we are asked to predict <code>0, 6, 12, 24</code> months into the future. So if the current visit month = 12 and we are predicting visit month 36, we do <strong>not</strong> know if the patient visited during visit month = 24.</p> <h1>Single Model RAPIDS cuML - 8th Place Gold</h1> <p>A single <strong>RAPIDS cuML SVR</strong> model trained with 11 features which are <code>visit_month</code> and <code>v0</code>, <code>v6</code>, <code>v12</code>, <code>v18</code>, <code>v24</code>, <code>v36</code>, <code>v48</code>, <code>v60</code>, <code>v72</code>, <code>v84</code> where the <code>v</code> features are described above achieves <code>CV = 55.5</code> and <code>Public LB = 55.4</code> and <code>Private LB = 60.5</code>. This is 8th place Gold. Using <strong>RAPIDS cuML</strong> was great because it allowed me to experiment dozens of models in minutes!</p> <h1>Single Model TensorFlow MLP - 4th Place Gold</h1> <p>After I found the above features, i tried different model types. I tried XGBoost with <code>PseudoHuber loss</code>. It's CV was not as good as <strong>RAPIDS cuML SVR</strong>. Next I tried TensorFlow MLP with <code>MeanAbsoluteError</code>. We built an MLP with 10 hidden layers where each hidden layer has 24 units and activation Relu. We used no Dropout and no BatchNormalization. We trained it for 15 epochs with Adam optimizer <code>LR = 1e-3</code> and then 15 epochs <code>LR = 1e-4</code>. This achieves <code>CV = 55.0</code> and <code>Public LB 54.9</code> and <code>Private LB 60.1</code>. This is 4th place Gold.</p> <h1>Creating Train Data</h1> <p>Our model trained with <code>train_clinical_data.csv</code> only. Creating proper train data for the above features was not trivial. We needed to convert each row from the original train data into <strong>4 new rows</strong>. If the original row was <code>patient_id = 55</code> and <code>visit_month = 24</code>. Then we needed to replace this row with 4 new rows:</p> <ul> <li>patient_id=55, visit_month=24, And v0=X1, v6=-1, v12=-1, v24=-1, v&gt;24=-1</li> <li>patient_id=55, visit_month=24, And v0=X1, v6=X6, v12=-1, v24=-1, v&gt;24=-1</li> <li>patient_id=55, visit_month=24, And v0=X1, v6=X6, v12=X12, v24=-1, v&gt;24=-1</li> <li>patient_id=55, visit_month=24, And v0=X1, v6=X6, v12=X12, v24=X24, v&gt;24=-1</li> </ul> <p>where <code>X1</code>, <code>X6</code>, <code>X12</code>, <code>X24</code> are the values <code>0</code> or <code>1</code> based on whether <code>patient_id=55</code> visited on months 0, 6, 12, 24 in the train data. The 4 new rows are current visit month minus 0, 6, 12, 24. 
If any of these subtractions are not valid visit months then we don't make that row.</p> <h1>4th Place Solution Code</h1> <p>I published my 4th place submission code using TensorFlow MLP <a href="https://www.kaggle.com/code/cdeotte/4th-place-solution-public-54-9-private-60-1" target="_blank">here</a>. Enjoy!</p>
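<p>To make the visit-date booleans and the 4-new-rows expansion concrete, here is a minimal hypothetical sketch (not the published solution code). It assumes the standard <code>train_clinical_data.csv</code> columns and leaves out joining the UPDRS targets back on; every helper name in it is illustrative.</p> <pre><code>import pandas as pd

V_MONTHS = [0, 6, 12, 18, 24, 36, 48, 60, 72, 84]   # months tracked as "v" booleans
HORIZONS = [0, 6, 12, 24]                            # how far ahead the API asks us to predict

clinical = pd.read_csv("train_clinical_data.csv")
visits = clinical.groupby("patient_id")["visit_month"].apply(set).to_dict()

rows = []
pairs = clinical[["patient_id", "visit_month"]].drop_duplicates()
for pid, target_month in pairs.itertuples(index=False):
    for h in HORIZONS:
        current = target_month - h                    # the visit we are "standing at" when predicting
        if current not in visits[pid]:
            continue                                  # invalid subtraction: that row is not created
        feat = {"patient_id": pid, "visit_month": target_month}
        for m in V_MONTHS:
            if m &gt; current:
                feat[f"v{m}"] = -1                    # future visits are unknown at prediction time
            else:
                feat[f"v{m}"] = int(m in visits[pid]) # 1 if the patient visited that month, else 0
        rows.append(feat)

train_features = pd.DataFrame(rows)                   # targets are then merged on patient_id / visit_month
</code></pre>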
Vesuvius Challenge - Ink Detection
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you very much for organizing such an interesting competition. I am greatly thankful to the hosts and the Kaggle staff.</p> <p>Continuing from the previous competition, I am delighted to have won a solo gold medal again, with a total of four medals (Table 1, NLP × 2, CV × 1). Additionally, it was my first time attempting the segmentation task, and I began with <a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a>'s excellent notebook <a href="https://www.kaggle.com/code/tanakar/2-5d-segmentaion-baseline-training/comments" target="_blank">here</a>. I am truly grateful for that.</p> <h1>1. Summary</h1> <p>My approach involved an ensemble of EfficientNet and SegFormer models. I believed that the test data was rotated, as shown in other discussions, so I rotated the images during inference, which resulted in a significant boost at the beginning (Public LB 0.58 → 0.74). Additionally, my originality came from incorporating IR images into the training data, which gave me a CV score increase of 0.01 and an LB score increase of 0.01. I will now explain the details below.</p> <h1>2. Inference</h1> <h2>2.1 About test data and inference flow</h2> <p>Based on the brief LB probing and the information provided on the competition page, I had an idea of what the test data might look like. Here is an image that represents my understanding:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4001300%2Fb9c4e129fe3285876db0a7135f704edb%2Fsolution1.jpg?generation=1686795321876414&amp;alt=media" alt=""></p> <p>Therefore, I structured my inference code in the Kaggle notebook as follows:</p> <ol> <li>Concatenate(axis=1) fragments A and B.</li> <li>Rotate the image clockwise.</li> <li>Perform inference (original + h flip TTA).</li> <li>Rotate the prediction counter-clockwise back to its original position.</li> <li>Cut and encode fragments A and B respectively.</li> </ol> <p>Of course, to reduce inference time, I skipped the inference for areas where the mask value was 0. Furthermore, instead of inferring fragments A and B separately, I concatenated them. This not only eliminated the 0 padding at the boundary between A and B but also allowed for continuous inference of the initial part of fragment B as a contiguous sequence. These changes led to a significant boost in my LB score (EfficientNet B4: 0.58 → 0.74).</p> <h2>2.2 Threshold</h2> <p>I believe many of you experienced the instability of the signal values. Therefore, I used the following function to rank the entire image and calculate percentiles. Then, by applying a threshold, I obtained a stable threshold value. This approach proved helpful not only during inference but also during ensemble processes. 
The 2nd place solution also used the same approach, described <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/417255" target="_blank">here</a>.</p> <pre><code>import numpy as np

def get_percentile(array):
    # rank the whole prediction map and rescale the ranks to 0-1 percentiles
    org_shape = array.shape
    array = array.reshape(-1)
    array = np.arange(len(array))[array.argsort().argsort()]
    array = array / array.max()
    array = array.reshape(org_shape)
    return array
</code></pre> <p>For the Public LB, the optimal threshold was found to be 0.96. However, using fragment 3, I conducted a simulation to observe the correlation between the partially optimal threshold (around 10%) and the threshold for the remaining 90%. As a result, I noticed that the threshold was overfitting for the 10% portion (likely reducing noise), while for the remaining 90%, it was better to slightly lower the threshold below the optimal value (aiming for clearer extraction of text). In fact, when comparing the same model, a threshold of 0.95 performed slightly better for the private LB (but less than 0.01). For the final submission, I used different models: sub1 with a threshold of 0.96 and sub2 with a threshold of 0.95.</p> <h1>3. Training</h1> <p>The following is an overview of the training process. Similar to inference, I created three sets of data and took their averages. It should be noted that SegFormer differs from CNN as it can only utilize 3 channels. As mentioned earlier, incorporating IR images resulted in improvements in both CV and LB scores.</p> <h2>3.1 CNN + Unet</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4001300%2Fa32d067916dee939f3301b72d8daf9f8%2Fsolution2.jpg?generation=1686795342236188&amp;alt=media" alt=""></p> <h2>3.2 SegFormer</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4001300%2Fba6b7cdc7012775d09930b49179e0317%2Fsolution3.jpg?generation=1686795355254555&amp;alt=media" alt=""></p> <h2>3.3 Fine-tuned Parameters</h2> <ul> <li>Stride: image size // 4</li> <li>Optimizer: Adam</li> <li>Epochs: 20</li> <li>Early stopping: 4</li> <li>Scheduler: get_cosine_schedule_with_warmup (from transformers)</li> <li>Warm-up: 0.1</li> <li>Gradient norm: 10</li> <li>Loss function: SoftBCEWithLogitsLoss (from segmentation_models_pytorch)</li> <li>Training excludes areas with a mask value of 0.</li> <li>TTA: Horizontal flip</li> </ul> <h2>3.4 Cross Validation</h2> <p>For submission1, I used 7-fold cross-validation, and for submission2, I used 10-fold cross-validation. Increasing the number of folds resulted in improvements in both CV and LB scores. 
I recall that increasing from 5-fold to 7-fold led to an improvement of approximately 0.1 in the Public LB score.</p> <h1>4 Final result</h1> <p>Ensemble was all mean value of predictions.</p> <p>sub1 : th 0.96, cv 0.740, public LB 0.811570, private LB 0.661339</p> <table> <thead> <tr> <th>model</th> <th>image size</th> <th>kfold</th> <th>cv</th> <th>public LB</th> <th>private LB</th> </tr> </thead> <tbody> <tr> <td>efficientnet_b7_ns</td> <td>608</td> <td>7 + fulltrain</td> <td>0.712</td> <td>0.80</td> <td>0.64</td> </tr> <tr> <td>efficientnet_b6_ns</td> <td>544</td> <td>7 + fulltrain</td> <td>0.702</td> <td>0.79</td> <td>0.64</td> </tr> <tr> <td>efficientnetv2_l_in21ft1k</td> <td>480</td> <td>7</td> <td>0.707</td> <td>0.79</td> <td>0.65</td> </tr> <tr> <td>tf_efficientnet_b8</td> <td>672</td> <td>7</td> <td>0.716</td> <td>0.79</td> <td>0.64</td> </tr> <tr> <td>segformer b3</td> <td>1024</td> <td>7</td> <td>0.738</td> <td>0.78</td> <td>0.66</td> </tr> </tbody> </table> <p>sub2 : th 0.95,cv 0.746 , public LB 0.799563, private LB 0.654812</p> <table> <thead> <tr> <th>model</th> <th>image size</th> <th>kfold</th> <th>cv</th> <th>public LB</th> <th>private LB</th> </tr> </thead> <tbody> <tr> <td>efficientnet_b7_ns</td> <td>608</td> <td>10</td> <td>0.722</td> <td>0.80</td> <td>0.65</td> </tr> <tr> <td>efficientnet_b6_ns</td> <td>544</td> <td>10</td> <td>0.720</td> <td>0.79</td> <td>0.63</td> </tr> <tr> <td>efficientnetv2_l_in21ft1k</td> <td>480</td> <td>10</td> <td>0.717</td> <td>0.79</td> <td>0.65</td> </tr> <tr> <td>segformer b3</td> <td>1024</td> <td>7</td> <td>0.738</td> <td>0.78</td> <td>0.66</td> </tr> </tbody> </table> <h2>4.1 Visualization of predictions</h2> <p>The following images visualize the predictions of submission1.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4001300%2F985af9f03d54b031cc820f29b63bc891%2Fpredictions.jpg?generation=1686889411081867&amp;alt=media" alt=""></p> <h1>5. My understanding</h1> <h2>5.1 Not working well</h2> <p>I tried several models such as ConvNext, Mask2Former, Swin Transformer + PSPNet, BeiT, and many others, but their effectiveness was not satisfactory for cv and lb. EfficientNet and mobilevit performed well and stable in this competition for me. I also experimented with SegFormer using various versions from b1 to b5. Although it showed good performance in cross-validation (CV), the leaderboard (LB) scores were poor and unstable. Five days before the end of the competition, when I plotted the relationship between CV and LB scores again, I noticed that larger models tended to overfit. They achieved good CV scores but had poor LB scores. After adjusting the layers used, I found that only b3 showed high LB scores, although I suspected it might be overfitting. When I used it in the ensemble, it significantly improved the LB scores, so I decided to include it. This discrepancy may be due to the limited amount of training data. I should have also tried regularization techniques such as dropout, freezing, and other strategies, but I ran out of time. Considering these options might have potentially improved the performance. 
<br> ※ These are just my guesses.</p> <h2>5.2 Potential Successes That Were Not Implemented</h2> <p>Pre-training using IR images: Although it improved the CV performance, it resulted in a decline in LB scores, so it was not implemented.<br> Including EMNIST (external data) in the training dataset: While it improved the CV performance, it led to a deterioration in LB scores, so it was not implemented.</p> <h1>6. Acknowledgments</h1> <p>I could not have achieved these results on my own. I was greatly influenced by those who I have collaborated with in the past, and I am grateful for their contributions. I would also like to express my sincere gratitude to those who have shared their knowledge and insights through previous competitions. Thank you very much.</p> <p>training code : <a href="https://github.com/chumajin/kaggle-VCID" target="_blank">https://github.com/chumajin/kaggle-VCID</a><br> inference code : <a href="https://www.kaggle.com/code/chumajin/vcid-6th-place-inference" target="_blank">https://www.kaggle.com/code/chumajin/vcid-6th-place-inference</a></p>
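<p>To tie sections 2.1 and 2.2 together, below is a rough sketch of the inference flow, not the exact submission code: <code>predict_fn</code>, the fragment shapes and the fixed 0.96 threshold are illustrative assumptions, and the real pipeline additionally skips masked-out areas and applies h-flip TTA inside the model call.</p> <pre><code>import numpy as np

def get_percentile(array):
    # same ranking helper as in section 2.2
    org_shape = array.shape
    array = array.reshape(-1)
    array = np.arange(len(array))[array.argsort().argsort()]
    array = array / array.max()
    return array.reshape(org_shape)

def infer_fragments(frag_a, frag_b, predict_fn, threshold=0.96):
    """frag_a, frag_b: (H, W, C) surface volumes assumed to share the same height H.
    predict_fn stands in for the model ensemble and returns a 2D probability map."""
    joined = np.concatenate([frag_a, frag_b], axis=1)   # 1. concatenate A and B side by side
    rotated = np.rot90(joined, k=-1)                    # 2. rotate clockwise
    pred = np.rot90(predict_fn(rotated), k=1)           # 3.-4. infer, then rotate back counter-clockwise
    pred = get_percentile(pred)                         # rank to percentiles for a stable threshold
    mask = (pred &gt;= threshold).astype(np.uint8)
    w_a = frag_a.shape[1]
    return mask[:, :w_a], mask[:, w_a:]                 # 5. cut back into fragments A and B
</code></pre>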
Vesuvius Challenge - Ink Detection
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Acknowledgement</h1> <p>I'm very grateful for organizing very challenging and fantastic challenge.<br> I feel like there are a lot of really great people and I respect them all.<br> I would like to express my gratitude to both the organizers and participants of the competition.</p> <p><br><br> <br></p> <h1>Solution</h1> <p>I would like to describe my solution in the order of contribution of performance improvement.<br> <strong>The number of ★ in the description below means the degree of contribution to performance improvement.</strong></p> <p><br><br> <br></p> <h3>1. Temporal random crop &amp; random paste &amp; random cutout (★★★★★)</h3> <h4>1.1. Thinking about the data itself</h4> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5725749%2Fb9819315244312a47802b241517dc57e%2F4.PNG?generation=1686971322910074&amp;alt=media" alt=""></p> <p>As you can see in the figure above, i thought that particular layers of fragment would not correspond to the same layers of other fragment. Using these points, i came up with an augmentation method that can give strong regularization to the model by reflecting the characteristics of the data.</p> <h4>1.2. Application</h4> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5725749%2F8601353d50ccdc56721bfc1341395bff%2F8.PNG?generation=1686981811287731&amp;alt=media" alt=""></p> <p>As you can see in the figure above, there are 3 steps.<br> 1) temporal random crop</p> <ul> <li>First, I decided to use a total of 22 layers (21-42) out of 65 layers. Of these 22 layers, layers with a range of cropping_min to cropping_max are cropped randomly. After several ablation study, I set cropping_min = 12, cropping_max = 22.</li> </ul> <p>2) random paste</p> <ul> <li>The randomly cropped layers are attached to a random area of 22 while maintaining sequential information.<br> <strong>This crop &amp; paste method allows the model to learn wide, various range of layers rather than a fixed area of fragments, allowing generalization to learning various fragments.</strong></li> </ul> <p>3) random cutout</p> <ul> <li>Similar to spatial cutout augmentation, I applied temporal cutout augmentation. Among the pasted layers, 0 to 2 random layers are filled with 0 values.</li> </ul> <p>The above three processes can be seen simply by looking at the Pytorch Dataset code part below.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5725749%2F2cb3e87617e91bbff8df5dbdf213838e%2F9.PNG?generation=1686982042582252&amp;alt=media" alt=""></p> <p><br><br> <br></p> <h3>2. Weight choice with low false positive (★★★★)</h3> <p>As shown in the figure below, I saved the predicted mask for each epoch. The file name has the epoch, score, tp, fp, and fn, of which the fp value was important. 
<strong>Even if the score was similar, if the fp was large, it tended to be very bad in the public score.</strong> Therefore, I wanted to select a weight with a small fp value and a high score value, and I selected an appropriate fp value for each fold through several submissions. In addition, I tried to increase the generalization performance by ensemble models with a range of fp values.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5725749%2Faa46536cf349929607790d1c3a32a21c%2F55.PNG?generation=1687062720342906&amp;alt=media" alt=""></p> <p><br><br> <br></p> <h3>3. Used models &amp; ensemble (★★★★)</h3> <p>I ensembles 3 models.</p> <ul> <li><p>3D resnet152, 3D resnet200, 3D resnext101 with unet-like decoder.</p></li> <li><p>Refered models was firstly inspired by JEBASTIN NADAR (<a href="url" target="_blank">https://www.kaggle.com/code/samfc10/vesuvius-challenge-3d-resnet-training</a>) and applied. Thanks for your effort and respect you. <br> 3D resnet152, 3D resnet200 is from <a href="https://github.com/kenshohara/3D-ResNets-PyTorch" target="_blank">https://github.com/kenshohara/3D-ResNets-PyTorch</a><br> 3D resnext101 is from <a href="https://github.com/okankop/Efficient-3DCNNs" target="_blank">https://github.com/okankop/Efficient-3DCNNs</a><br> All models have mit licenses and are not against the rules.</p></li> <li><p>3D Resnet was pretrained on Kinetic 710, and 3D resnext101 was pretrained on Kinetic 600.<br> <strong>There was a huge difference between being trained in kinetics and not being able to.</strong> Therefore, during the competition, I tried to find a model trained in the kinetic 600 or 700.</p></li> <li><p>All models are combined unet-like simple decoder. Features from 3D encoder are upsampled and concatenated using decoder.<br> details are in code.</p></li> </ul> <p><br><br> <br></p> <h3>4. Label smoothing(0.3), cutmix augmentation, data clipping (★★★)</h3> <ul> <li>Cross entropy with label smoothing improved performance. After doing ablation study, i set the parameter of label smoothing to 0.3</li> <li>Cutmix augmentation improved performance in cross validation &amp; public score</li> <li>Inspired by AJLAND(<a href="url" target="_blank">https://www.kaggle.com/code/ajland/eda-a-slice-by-slice-analysis</a>), I clipped the image below 50 and more than 200. Also thanks for your effort and respect you.<br> (image = np.clip(image, 50, 200))</li> </ul> <p><br><br> <br></p> <h3>5. Belief for local cross validation (★★)</h3> <p>As shown in the table below, the two models are the ones I submitted finally.<br> <strong>Ensemble 2 had a lower public score than ensemble1, but performed better in cross validation.</strong> There were other models that could be adopted, but considering that it came out well in the cross validation, it was adopted, and <strong>the performance was improved more in the private, resulting in a better first place.</strong></p> <table> <thead> <tr> <th>models</th> <th>cv score</th> <th>threshold</th> <th>public score</th> <th>private score</th> </tr> </thead> <tbody> <tr> <td>Ensemble 1</td> <td>0.66 / 0.762/ 0.723 / 0.68</td> <td>0.5</td> <td>0.795835</td> <td>0.663542</td> </tr> <tr> <td>Ensemble 2</td> <td>0.67 / 0.764 / 0.724 / 0.69</td> <td>0.47</td> <td>0.789024</td> <td>0.674544</td> </tr> </tbody> </table> <p><br><br> <br></p> <h3>6. Other training details (★)</h3> <p><br></p> <h4>6.1. 
loss, optimizer, scheduler</h4> <ul> <li>loss : CrossEntropyLoss with label smoothing 0.3</li> <li>optimizer = AdamW(lr=1e-4)</li> <li>scheduler = cosine annealing with warmup</li> </ul> <p><br></p> <h4>6.2. cross validation strategy &amp; result</h4> <p>I trained the model with 4fold</p> <ul> <li>fold1 : fragment 1</li> <li>fold3 : fragment 3</li> <li>fold2, 4 : fragment 2 divided into 2 sub-frag(9506 x 14830 -&gt; (4300 x 14830) &amp; (5206 x 14830))</li> </ul> <p>The results are shown in the table below.<br> Ensemble means ensemble of 3 models with same weight.</p> <table> <thead> <tr> <th>models</th> <th>cv score</th> <th>threshold</th> <th>public score</th> <th>private score</th> </tr> </thead> <tbody> <tr> <td>3D Resnet152</td> <td>0.64 / 0.71 / 0.71 / 0.69</td> <td>0.5</td> <td>0.78</td> <td>-</td> </tr> <tr> <td>3D Resnet200</td> <td>0.66 / 0.71 / 0.69 / 0.66</td> <td>0.5</td> <td>0.77</td> <td>-</td> </tr> <tr> <td>3D Resnext101</td> <td>0.61 / 0.72 / 0.71 / 0.64</td> <td>0.5</td> <td>0.77</td> <td>-</td> </tr> <tr> <td>Ensemble</td> <td>0.66 / 0.762/ 0.723 / 0.68</td> <td>0.5</td> <td>0.795835</td> <td>0.663542</td> </tr> <tr> <td>Ensemble</td> <td>0.67 / 0.764 / 0.724 / 0.69</td> <td>0.47</td> <td>0.789024</td> <td>0.674544</td> </tr> </tbody> </table> <p><br></p> <h4>6.3. image augmentation</h4> <p><code>all probability set to 0.6</code></p> <ul> <li>Horizontal, Vertical Flip</li> <li>RandomGamma(limit=(50, 150))</li> <li>RandomBrightnessContrast(brightness=0.2, contrast=0.2)</li> <li>Gaussian Noise(10, 30), Gaussian blur</li> <li>shift 0.1, scale 0.1, rotate 360</li> <li>coarsedropout(holes=4, size=0.2 * image_size)</li> </ul> <h4>6.4 Data Preparation</h4> <ul> <li>stride rate 3</li> <li>image size 256</li> </ul> <h4>6.5 etc</h4> <ul> <li>To achieve consistent and stable results, I selected the best approach based on threshold 0.5.</li> <li>inference stride is (256 / 5)</li> </ul> <p><br><br> <br></p> <h3>7. Tried but not worked</h3> <ul> <li><p>mixup augmentation</p></li> <li><p>temporal channel shuffle augmentation</p></li> <li><p>3D transformer encoder(uniformerv2, video swin transformer) - cannot training… i don't know why loss didn't decrease.</p></li> <li><p>spatial TTA</p></li> <li><p>temporal TTA - because of my crop&amp;paste method mentioned above, i tried temporal TTA. In other words, I tried tta by cropping 22, 20, and 18 layers in various ways and then averaging 3 predictions, sometimes performance increased and sometimes decreased. I didn't apply it because it took too much time.</p></li> <li><p>other losses(tversky focal loss, fbeta loss)</p></li> </ul> <p><br><br> <br></p> <h3>8. Code</h3> <p>8.1. Github version(based on Docker)</p> <ul> <li><a href="https://github.com/AhnHeeYoung/Competition/tree/master/kaggle" target="_blank">https://github.com/AhnHeeYoung/Competition/tree/master/kaggle</a></li> </ul> <p>8.2. Kaggle Notebook version</p> <ul> <li>Training : <a href="https://www.kaggle.com/code/ahnheeyoung1/ink-detection-training" target="_blank">https://www.kaggle.com/code/ahnheeyoung1/ink-detection-training</a></li> <li>Inference : <a href="https://www.kaggle.com/code/ahnheeyoung1/ink-detection-inference/notebook?scriptVersionId=136610637" target="_blank">https://www.kaggle.com/code/ahnheeyoung1/ink-detection-inference/notebook?scriptVersionId=136610637</a></li> </ul>
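<p>Because the Dataset code for the temporal crop &amp; paste &amp; cutout in section 1 is only shown as an image above, here is a rough reconstruction sketch; the function name, the zero fill outside the pasted block, and other small details are assumptions rather than the exact training code.</p> <pre><code>import numpy as np

def temporal_crop_paste_cutout(volume, crop_min=12, crop_max=22, max_cutout=2):
    """volume: (22, H, W) array holding the 22 selected layers (21-42 of the 65).
    Returns an augmented volume of the same shape, following steps 1-3 above."""
    n_layers = volume.shape[0]

    # 1. temporal random crop: keep a random contiguous block of crop_min..crop_max layers
    crop_len = np.random.randint(crop_min, crop_max + 1)
    start = np.random.randint(0, n_layers - crop_len + 1)
    cropped = volume[start:start + crop_len]

    # 2. random paste: put the block back at a random depth, keeping its order; the rest stays zero
    out = np.zeros_like(volume)
    paste = np.random.randint(0, n_layers - crop_len + 1)
    out[paste:paste + crop_len] = cropped

    # 3. random cutout: fill 0 to max_cutout of the pasted layers with zeros
    n_cut = np.random.randint(0, max_cutout + 1)
    if n_cut &gt; 0:
        cut_idx = np.random.choice(np.arange(paste, paste + crop_len), size=n_cut, replace=False)
        out[cut_idx] = 0
    return out
</code></pre>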
Benetech - Making Graphs Accessible
1st Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to pay tribute to all the participants who worked on this competition. <br> I would also like to thank the hosts for organizing this interesting task competition. <br> This task was very interesting and I enjoyed working on it because I could think of many different approaches. <br> I am honored to have won first place in this very interesting competition.</p> <h1>Overview</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3823496%2F16283ff691149c87b3b01a49a432f107%2Foverview.jpg?generation=1687442367636860&amp;alt=media" alt="Overview"></p> <p>My solution consists of a two-step pipeline that first classifies chart types using a classification model and then performs data series inference. <br> In the inference phase of the data series, Bar, Line, and Dot were end-to-end predictions by Deplot trained for each chart type, while Scatter was predicted by an object detection-based approach. </p> <p>The final scores are as follows. </p> <table> <thead> <tr> <th></th> <th>Overall</th> <th>Scatter</th> <th>Dot</th> <th>Line</th> <th>Vertical Bar</th> <th>Horizontal Bar</th> </tr> </thead> <tbody> <tr> <td>public</td> <td>0.86</td> <td>0.10</td> <td>0.00</td> <td>0.32</td> <td>0.39</td> <td>0.05</td> </tr> <tr> <td>private</td> <td>0.72</td> <td>0.30</td> <td>0.01</td> <td>0.13</td> <td>0.26</td> <td>0.01</td> </tr> </tbody> </table> <h1>Dataset</h1> <p>I used the following three data sets. </p> <ol> <li><p>Competition data set(comp_extracted_dataset/comp_generated_dataset) </p> <ul> <li>Using both extracted dataset (comp_extracted_dataset) and generated dataset (comp_generated_dataset). </li> <li>For the generated data, data with noise in the annotations were removed by a simple check (about 100 images). </li></ul></li> <li><p>ICDAR data set(ICDAR_dataset/ICDAR_manualannot_dataset) </p> <ul> <li>I used 1406 pieces of data for which annotations were provided (ICDAR_dataset) and 1903 pieces of data for which no annotations were provided (ICDAR_manualannot_dataset). </li> <li>For data for which annotations were provided, I visually rechecked the annotation contents and manually corrected all data that did not follow the competition's annotation rules (e.g., % notation) or contained noise. </li> <li>For the data for which no annotations were provided, I first visually checked the appearance of all the data and selected the data that could be used in this competition. Next, I inferred and assigned pseudo-labels using the Deplot model, visually checked all the results again, and manually corrected all incorrect annotations. </li></ul></li> <li><p>Generated synthetic data set(synthetic_dataset) </p> <ul> <li>After reviewing the images in the competition dataset and determining that the comp_generated_dataset alone did not have a sufficient variation to achieve robustness, so I generated about 65k synthetic data myself. </li> <li>I mainly generated synthetic data with features that comp_generated_dataset does not have. 
<ul> <li>histogram </li> <li>Label contains line breaks </li> <li>Bar chart with error bars </li> <li>Line chart containing x labels not included in the data series</li> <li>etc… </li></ul></li> <li>To add diversity, I also included 10k images from the dataset published by <a href="https://www.kaggle.com/brendanartley" target="_blank">@brendanartley</a>. </li></ul> <p>(Example of synthetic data I generated) <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3823496%2Fd4df82a41fcaebeda2ffc93fb8ad9159%2Fgenerated_data.jpg?generation=1687442722440643&amp;alt=media" alt="generated_data"></p></li> </ol> <h1>Solution pipeline</h1> <h2>1. Chart classification</h2> <ul> <li>There is not much to note, as I just performed a simple classification task. </li> <li>I used two models, convnext_large_384 and swin_large_patch4_window12_384, and integrated their results in a weighted ensemble. </li> </ul> <p><strong>Training configs</strong> </p> <ul> <li>Dataset (training data and validation data were randomly divided into folds) <ul> <li>Training(78k images):comp_extracted_dataset, comp_generated_dataset, ICDAR_dataset, synthetic_dataset(horizontal_bar/dot) </li> <li>Validation(4k images):comp_extracted_dataset, ICDAR_dataset, synthetic_dataset(horizontal_bar/dot)</li></ul></li> <li>Hyperparameters<ul> <li>Epochs:15 </li> <li>Batch size:16 </li> <li>Adafactor (lr=3e-5) </li></ul></li> </ul> <h2>2. Data series prediction</h2> <p>Based on the classification results of Step 1, different approaches are used for scatter and the rest. <br> Except for scatter, the predictions are made in Deplot, and scatter is an object detection-based prediction. </p> <h3>2-a. Bars &amp; Lines &amp; Dots</h3> <ul> <li><p>I experimented by switching the model to Deplot based on the Donut-based notebook that <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> had published. However, I could not train well just by changing the model, so I made some modifications. </p> <ul> <li>Ground Truth Format <br> Removed chart type from GroundTruth and adapted the format to that of the Deplot original. <br> <code>&lt;0x0A&gt; x_value1 | y_value1 &lt;0x0A&gt; x_value2 | y_value2 &lt;0x0A&gt; x_value3 | y_value3 &lt;/s&gt;</code></li> <li>x-axis and y-axis swap of horizontal_bar <br> Since the concept of x-axis and y-axis was reversed in the annotation rules of this competition and the original format of Deplot, I trained according to the original concept of Deplot and swapped the values during inference. </li></ul></li> <li><p>Training was conducted in multiple stages. In the first stage, training was conducted using data from all chart types (All Chart-type Train), and using the results of that training as initial weights, one or two additional training sessions (Specific Chart-type Train) were conducted using only specific chart-type images to generate a model specialized for a chart type.</p> <ul> <li>The scores for vertical_bar and line were slightly improved by this method. (Two Specific Chart-type Train runs were performed for vertical_bar and one for line.) </li> <li>Horizontal_bar had a worse cv after the second stage of training, probably due to the small amount of extracted data, so I decided to use the All Chart-type Train model to make predictions. </li> <li>Dot chart only has generated data and I decided that I could not validate it successfully, so I decided not to train after the second stage and use the All Chart-type Train model to make predictions. 
<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3823496%2F0a86b394e87edd440600ac531288897c%2FDeplot_training.jpg?generation=1687442833429916&amp;alt=media" alt="Deplot_training"></li></ul></li> </ul> <p><strong>Training configs</strong> </p> <ol> <li><p>All Chart-type Train </p> <ul> <li>Dataset <ul> <li>Train(120k images):comp_extracted_dataset, comp_generated_dataset, ICDAR_dataset, <br> synthetic_dataset </li> <li>Validation(2.5k images):comp_extracted_dataset, ICDAR_dataset, ICDAR_manualannot_dataset</li></ul></li> <li>Hyperparameters<ul> <li>Epochs:8</li> <li>Batch size:2 </li> <li>Adafactor (lr=1e-5)</li> <li>cosine scheduler with warmup (warmup_step=4000)</li> <li>Augmentation: GaussianBlur, GaussNoise, some color augmentations </li></ul></li></ul></li> <li><p>Specific Chart-type Train (vertical_bar) </p> <ul> <li><p>Dataset <br> [1st time] </p> <ul> <li>Train(6k images) :comp_extracted_dataset, ICDAR_dataset, synthetic_dataset </li> <li>Validation(1.3k images):comp_extracted_dataset, ICDAR_dataset, ICDAR_manualannot_dataset </li></ul> <p>[2nd time] </p> <ul> <li>Train(1500 images):comp_extracted_dataset, ICDAR_dataset, ICDAR_manualannot_dataset </li> <li>Validation(500 images) :comp_extracted_dataset, ICDAR_dataset, ICDAR_manualannot_dataset</li></ul></li> <li><p>Hyperparameters</p> <ul> <li>cosine scheduler with warmup (warmup_step=0)</li> <li>Otherwise, same as All Chart-type Train </li></ul></li></ul></li> <li><p>Specific Chart-type Train (line) </p> <ul> <li><p>Dataset <br> [1st time] </p> <ul> <li>Train(1150 images):comp_extracted_dataset, ICDAR_dataset, ICDAR_manualannot_dataset </li> <li>Validation(400 images):comp_extracted_dataset, ICDAR_dataset, ICDAR_manualannot_dataset </li></ul></li> <li><p>Hyper paramete</p> <ul> <li>cosine scheduler with warmup (warmup_step=0)</li> <li>Otherwise, same as All Chart-type Train </li></ul></li></ul></li> </ol> <p><strong>Score</strong> </p> <ul> <li><p>All Chart-type Train</p> <table> <thead> <tr> <th></th> <th>Overall</th> <th>Scatter</th> <th>Dot</th> <th>Line</th> <th>Vertical Bar</th> <th>Horizontal Bar</th> </tr> </thead> <tbody> <tr> <td>public</td> <td>0.78</td> <td>0.06</td> <td>0.00</td> <td>0.29</td> <td>0.38</td> <td>0.05</td> </tr> <tr> <td>private</td> <td>0.53</td> <td>0.13</td> <td>0.01</td> <td>0.12</td> <td>0.26</td> <td>0.01</td> </tr> </tbody> </table></li> <li><p>Specific Chart-type Train</p> <table> <thead> <tr> <th></th> <th>Overall</th> <th>Scatter</th> <th>Dot</th> <th>Line</th> <th>Vertical Bar</th> <th>Horizontal Bar</th> </tr> </thead> <tbody> <tr> <td>public</td> <td>0.81</td> <td>0.06</td> <td>0.00</td> <td>0.32</td> <td>0.39</td> <td>0.05</td> </tr> <tr> <td>private</td> <td>0.55</td> <td>0.13</td> <td>0.01</td> <td>0.13</td> <td>0.26</td> <td>0.01</td> </tr> </tbody> </table></li> </ul> <h3>2-b. Scatter</h3> <ul> <li>Only scatter was not accurate enough with Deplot no matter how I tried, so I adopted an object detection-based approach. </li> <li>Since scatter is guaranteed to have label values of numeric type, it is easy to calculate values from ratios if the "label position," "label text," and "scatter point position" are known. I used CACHED to detect "label position," Deplot to read "label text," and YOLOX to detect "scatter point". 
</li> <li>The mapping between label bboxes and label texts is a bit crude, but the smallest label text is mapped to the left-most (top in the case of y-labels) label bbox, and the largest label text is mapped to the right-most (bottom in the case of y-labels) label bbox. It is possible that the total number of detected label bboxes does not match the total number of read label texts, but we did not take any special measures because such a pattern was very rare when we checked the verification data. </li> <li>The above object detection approach resulted in a scatter score of <strong>Pulic:0.10/Private:0.30</strong>, much better than the Deplot-based score (Pulic:0.06/Private:0.13).</li> </ul> <p><strong>More Details</strong> <br> Scatter point detection </p> <ul> <li>comp_extracted_dataset, ICDAR_dataset, synthetic_dataset were used for training data (about 12.5k images)</li> <li>Somewhat semi-automated using provided annotation data and image processing, but inevitably noisy, so corrected with manual annotation. </li> <li>To increase detection accuracy, only the plot area was cropped for training and inference.</li> <li>The model was YOLOX-l, input image size was 1280 for both training and inference, and 50 epochs were trained.</li> </ul> <p>Label text reading</p> <ul> <li>comp_extracted_dataset, ICDAR_dataset, comp_generated_dataset were used for training data (about 62k images, all chart types were used, not just scatter) </li> <li>Use the same training configs as in (2-a) All_Chart-type_Train for the training parameters, and change the format of Ground Truth as follows <br> <code>&lt;0x0A&gt; x_label1 | x_label2 | x_label3 &lt;0x0A&gt; y_label1 | y_label2 &lt;/s&gt;</code> </li> <li>At first, I thought of using EasyOCR or similar to achieve this, but there were many issues to be addressed, such as support for character rotation, and it seemed that tuning would be very difficult to increase versatility, so I adopted a method of batch reading with Deplot. </li> </ul> <hr> <p>Finally, I would like to thank once again everyone involved in this competition. <br> Thank you very much for taking the time to read this!</p> <p>[edit] </p> <ul> <li>I have published 1st place <a href="https://www.kaggle.com/code/kashiwaba/benetech-1st-place-inference" target="_blank">inference notebook</a>(23/6/28) </li> <li>I have published 1st place <a href="https://github.com/KASSII/benetech_1st_place_train" target="_blank">training code</a>(23/7/8) </li> </ul>
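<p>As a small appendix to the scatter pipeline in 2-b, here is a hypothetical sketch of its last step: converting detected point pixels into data values once label bboxes and label texts have been paired. The helper name and the tick format are illustrative, not the actual competition code.</p> <pre><code>import numpy as np

def pixels_to_values(points_xy, x_ticks, y_ticks):
    """points_xy: (N, 2) array of scatter-point centers in pixels (x, y).
    x_ticks / y_ticks: lists of (pixel_position, numeric_value) pairs for the matched axis labels,
    e.g. x_ticks = [(120.0, 0.0), (680.0, 50.0)]. Only the two extreme ticks are really needed."""
    (px0, vx0), (px1, vx1) = min(x_ticks), max(x_ticks)   # left-most and right-most x label
    (py0, vy0), (py1, vy1) = min(y_ticks), max(y_ticks)   # top-most and bottom-most y label

    # linear interpolation from pixel coordinates to data coordinates on each axis
    xs = vx0 + (points_xy[:, 0] - px0) * (vx1 - vx0) / (px1 - px0)
    ys = vy0 + (points_xy[:, 1] - py0) * (vy1 - vy0) / (py1 - py0)
    return list(zip(xs.tolist(), ys.tolist()))
</code></pre>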
Lux AI Season 2
7th place IL-based solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lux AI Season 2 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello</p> <p>Sharing my 7th place imitation learning solution. Bidding model trained on ry_andy_ episodes. Factory placement and unit/factory actions trained on flg's episodes. </p> <p><a href="https://github.com/skyramp/luxai_season2" target="_blank">https://github.com/skyramp/luxai_season2</a></p>
Predict Student Performance from Game Play
Solution focusing on efficiency LB
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I'd like to congratulate all prize/medal winners and to thank the organizers for giving newbies in the field of machine learning a chance to gain experience, and experts a chance to increase theirs, too. <br> Second, I want to share the simple idea behind my latest submissions so that I can get feedback from others who have more experience than me.<br> As an introduction, I joined the competition solo and my goal was to learn by doing a real project, even if the results were not good or satisfactory. I tried a lot of things and learnt how to make a successful submission and a working model over a relatively long time :D.<br> But at some point, I checked the efficiency LB and found out that my position in it (when compared to mine in the main LB) is way better, so I started to focus more on simple, less time-consuming ideas that still get a reasonable public score.<br> My feature engineering concentrated on the idea that, maybe, the actions of players in the main game who answer a specific question correctly (or incorrectly) are somehow similar to each other, focusing mostly on the elapsed time they spend on such actions.<br> So, for each question I developed 2 different models that ensemble each other (see the sketch at the end of this post), based on the following features:<br> 1st model: elapsed time spent in each unique ‘room_fqid’ &amp; ‘level’ group<br> 2nd model: length of each ‘fqid’ group divided by the length of the whole group, since the elapsed-time differences within this column's groups wouldn't tell much, as the same values don't appear consecutively.</p> <p>The features are fed into an LGBMClassifier with previously tuned hyperparameters.<br> The voting weights of the two models vary from question to question, depending on which weights perform better on a small subset of the same training data, and that's it!</p> <p>My code runs in about half a minute and takes between 4 and 6 minutes to complete scoring, with a 0.689 highest private LB score among my selected submissions; my highest record was 0.693 in about 9 minutes of scoring + running time, but unfortunately I didn't choose this submission :) </p> <p>I'd like to know about everyone's feedback, as I am trying to evaluate my experiment in learning through doing real projects. Thank you again and wish me luck in the final efficiency LB :D!</p> <p>Link for my code example: <a href="url" target="_blank">https://www.kaggle.com/code/belalemadhussein/simple-highly-efficient-model</a></p>
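<p>To illustrate the two feature ideas above (the linked notebook has the real implementation), here is a small hypothetical sketch. Column names come from the competition event log; the denominator of the second feature (the whole session is one possible reading of the write-up), the helper structure and the hyperparameters are assumptions.</p> <pre><code>import pandas as pd
from lightgbm import LGBMClassifier

def elapsed_time_features(session_df):
    """Feature idea 1: elapsed time spent in each ('room_fqid', 'level') group of one session."""
    grp = session_df.groupby(["room_fqid", "level"])["elapsed_time"]
    spent = grp.max() - grp.min()
    spent.index = [f"time_{room}_{level}" for room, level in spent.index]
    return spent

def fqid_share_features(session_df):
    """Feature idea 2: length of each 'fqid' group divided by the length of the whole session."""
    share = session_df["fqid"].value_counts() / len(session_df)
    share.index = [f"fqid_share_{v}" for v in share.index]
    return share

def build_table(sessions):
    """sessions: dict mapping session_id to its event-log DataFrame; returns one feature row per session."""
    rows = {sid: pd.concat([elapsed_time_features(df), fqid_share_features(df)]) for sid, df in sessions.items()}
    return pd.DataFrame(rows).T.fillna(0)

# one LGBMClassifier per question; labels_q would hold the 0/1 correctness labels for question q
# model_q = LGBMClassifier(n_estimators=200, learning_rate=0.05).fit(build_table(sessions), labels_q)
</code></pre>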
Image Matching Challenge 2023
9th Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Great thanks to the organizers and Kaggle staff for this amazing competition.<br> Our solution shares architectural similarities with the baseline provided by the organizers. <br> Firstly, we addressed the issue of rotation in-variance by implementing a rotation model to standardize the orientation of input images. Next, we employed a neural network for the retrieval task, enabling us to extract matching pairs for the feature extraction and matching process. This process generated a database which served as the input for the incremental mapping in Colmap.</p> <h2>Orientation Model</h2> <p>We would like to express our gratitude to <a href="https://www.kaggle.com/iglovikov" target="_blank">@iglovikov</a> for developing this great <a href="https://github.com/ternaus/check_orientation">model</a> , which was trained on a large dataset and exhibited good performance in both (LB) and (CV) evaluations. We made a slight modification to this model, which we refer to as self-ensembling.<br> In our proposed adjustment, we utilized the model iteratively on the query image and its different rotated versions. Subsequently, we tuned the threshold by conducting validation tests on more than 1500 images with different rotation angles. Through experimentation, we discovered that using a threshold between 0.8 and 0.87 might result in incorrect orientation predictions. To address this issue, we applied self-ensembling by further rotating the image and checking if the predicted class probability fell within this range of thresholds. We repeated this last step twice to ensure accuracy.</p> <h2>Retrieval Method</h2> <h3>Retrieval Model</h3> <p>In order to address the challenges posed by the cost inefficiency of exhaustive matching and the limitations of sequential matching for 3D reconstruction, we sought alternative methods for image retrieval. <br> After careful evaluation, we chose to utilize NetVlad as our chosen method due to its superior performance compared to openibl and cosplace. To further enhance the results, we employed various techniques, including:<br> 1) We passed the original image and its horizontally flipped version through the model. By summing the descriptors obtained from both passes, we achieved improved performance, particularly for highly similar images. This technique is especially effective in cases where the scene exhibits symmetry. By redefining a new point in the n-dimensional descriptive space, which effectively increase the similarity distance between two distinct parts of the scene.</p> <p>2) Re-ranking: After calculating the similarity scores, we performed re-ranking by selecting the top 1 match. We then re-queried the retrieval process using these two images instead of just one. The similarity scores of the resulting matches were summed together after raising them to a specific power "m". 
This manipulation of probabilities ensures that if the best match for one of the query images is found, it will be favored over an image that is similar to both where the sum of the similarities on both scenarios is equal.</p> <p>We repeated this procedure twice using NetVlad with Test Time Augmentation (TTA) on the image size. The results were nearly perfect, and the approach even enabled the correct ordering of The Wall scene pairs, which can be best matched by doing sequential matching.</p> <h3>Number of Matches</h3> <p>Determining the number of image pairs to select is a critical factor that directly affects both validation and leaderboard performance. This becomes particularly important for large scenes where there may be no or very few common images between subsets of the scene.</p> <p>To address this challenge, we devised a strategy based on a complete graph representation. In this graph, each edge represents the similarity between two images. The goal is to choose a subset of edges where each image has an equal number of connected nodes.</p> <p>We employed a Binary Search approach to determine the number of matches for each image, with a check function to verify if the resulting graph is connected or not. The lower bound of the binary search was set to half the number of images, ensuring that we consider common matches and prevent incomplete 3D model reconstruction. Additionally, we made sure that the approach remains exhaustive for small scenes containing less than 40 images.</p> <p>By employing this method, we aimed to strike a balance between capturing sufficient matching pairs for accurate 3D reconstruction while avoiding redundant or disconnected image subsets.</p> <h2>Feature Extraction and Matching</h2> <p>In our selected submissions, we have utilized the SuperPoint and SuperGlue algorithms followed by MagSac filtering. Unfortunately SuperGlue is not licensed for commercial use. However, we have also achieved highly promising results using GlueStick. We have made modifications to the GlueStick architecture to integrate it with SuperPoint, and we achieved a score of approximately 0.450 on the public leaderboard and 0.513 on private leaderboard without employing our best tuning parameters. It is worth noting that this modified architecture is permitted for commercial use and offers improved processing speed. We anticipate that further tuning can yield even better results with GlueStick, but didn't choose it as our last submissions.</p> <h2>Refinement</h2> <p>Although not included in our selected submissions, we would like to mention an approach that significantly improved validation results across various scenes. We employed Pixel-Perfect-SFM in conjunction with sd2net for dense matching, but it didn't improved the results on public leaderboard.</p> <h2>Registering Unregistered Images</h2> <p>While not part of our selected submissions, we made attempts to register unregistered images using various techniques. However, these attempts did not yield significant improvements on the leaderboard. 
We explored the following strategies:</p> <ul> <li>Utilizing different orientations and attempting registration.</li> <li>Applying different extractor-matchers, such as LoFTR and R2D2, for the unregistered images.</li> <li>Adjusting parameters for SuperPoint and SuperGlue to optimize the registration process.</li> </ul> <h2>Tried but not worked</h2> <ul> <li>Pixel-Perfect-SFM</li> <li>Semantic Segmentation masks</li> <li>Illumenation enhancement</li> <li>PnP localization for unregistered Images</li> <li>LoFTR/ SILK / DAC / Wireframe</li> <li>CosPlace / OpenIBL</li> <li>Large Image splitting into Parts</li> <li>Grid-based Point Sampling (equally spatial replacement of points in an image).</li> <li>Rotation self-ensemble averaging</li> <li>Filtering the extracted pairs using a threshold from a binarysearch function or a fixed threshold.</li> </ul> <h2>Important Notes:</h2> <p>1) Most participants were using colmap reconstruction, which is nondeterministic. We were able to modify the code so that it became deterministic but working only one CPU thread, which helped us observe our improvement and avoid randomness.<br> 2) We found out that OpenCV and PIL libs use EXIF Information when reading images, unless providing flags to prevent it. By that, we mean that if the orientation in EXIF will rotate the image automatically before processing. This was confusing for us, as there are missing information about how the GT were collected for rotation part (with or without considering them), that's why one of our chosen solutions included this correction to overcome such issue, plus we lost a lot of submission to check the effect of this issue on leaderboard. It would have been more helpful if there was better explanation about how the GT calculated.<br> 3) Our validation scored 0.485 on public and 0.514 on private, with local score of 0.87.<br> 4) The fact that validation and leaderboard were not correlated made things more difficult and random, as it was clear there will be a shake up due to the fact that for some specific scene some solutions might fail which will drastically impact the mAA score.</p> <h2>Acknowldgment</h2> <ul> <li>I wanted to take a moment to express my sincere appreciation for the exceptional contribution and dedication to my friend and teammate <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a> . His hard work and commitment have played a crucial role in our collective success, and I am truly grateful for the opportunity to work alongside such a remarkable individual. we had fun and we learned a lot.</li> <li>Special thanks to the <a href="https://github.com/cvg" target="_blank">Computer Vision and Geometry Lab</a> Group for (hloc, gluestick, pixel perfect sfm ..)</li> </ul> <h2> </h2>
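<p>As an appendix, here is a minimal sketch of the pair-selection idea from the Number of Matches section: a binary search over the per-image neighbour count with a graph-connectivity check, exhaustive for small scenes. The similarity matrix is assumed to come from the retrieval step, and the implementation details are illustrative rather than the team's exact code.</p> <pre><code>import numpy as np
import networkx as nx

def pairs_for(similarity, k):
    """Top-k most similar neighbours for every image, as an undirected set of index pairs."""
    n = similarity.shape[0]
    pairs = set()
    for i in range(n):
        order = [j for j in np.argsort(-similarity[i]) if j != i][:k]
        pairs.update(tuple(sorted((i, j))) for j in order)
    return pairs

def is_connected(pairs, n):
    g = nx.Graph()
    g.add_nodes_from(range(n))
    g.add_edges_from(pairs)
    return nx.is_connected(g)

def select_pairs(similarity):
    """Binary-search the smallest per-image neighbour count whose pair graph is connected."""
    n = similarity.shape[0]
    if n &lt; 40:                                    # small scenes stay exhaustive
        return {(i, j) for i in range(n) for j in range(i + 1, n)}
    lo, hi = n // 2, n - 1                        # lower bound: half the number of images
    while lo &lt; hi:
        mid = (lo + hi) // 2
        if is_connected(pairs_for(similarity, mid), n):
            hi = mid                              # connected: try fewer neighbours per image
        else:
            lo = mid + 1
    return pairs_for(similarity, lo)
</code></pre>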
Image Matching Challenge 2023
5th place solution: kNN shortlist and rotation correction
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I will be in Vancouver for the CVPR IMW. I am very much looking forward to learning from you all!</p> <h2>Overview</h2> <p>Basically, the stereo matching part is the same as that of the winning solution in 2022, <a href="https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/329131" target="_blank">the crop and multi-scale ensemble composition by DBSCAN</a>. In addition to this, the improvement of the shortlist, the improvement of computational efficiency, and rotation correction are the main features of my solution.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6388%2F522450db593ec4545cbb3ab6d3f59cd6%2Fmatcher.png?generation=1686648814730059&amp;alt=media" alt=""></p> <h2>kNN shortlist + completion</h2> <p>In the baseline, the euclidean distance of the global descriptor is used to filter stereo matching candidates by a threshold value. However, since the scale of the distance differs among datasets, I thought it would not be appropriate to use a common threshold value. It’s not robust to unknown data sets.</p> <p>Since the number of images per scene is only about 200 at most, stereo matching and RANSAC verification can be performed <strong>using a very lightweight model</strong> for all image combinations. The shortlist was generated by extracting the k nearest neighbors for each image based on the number of inliers.</p> <p>The very lightweight model may not match well enough. Some scenes produce images that do not match any of the images. This is fatal because it prevents camera pose estimation. Therefore, image pairs were complementarily added to the shortlist so that there are at least k neighbors for all images by the number of matching keypoints.</p> <p><strong>The very lightweight model</strong>: I used SPSG as the very lightweight model. The number of keypoints was set to 512 for efficiency as shown in Fig. 11, Appendix B of superglue's paper.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6388%2F6f82729162d57442693ff3fa2b408209%2Finference_time.png?generation=1686630524341903&amp;alt=media" alt=""></p> <h2>Rotation correction</h2> <p>A notable weakness of superglue's pretrained model is its lack of robustness to rotation. This is especially noticeable in the heritage dataset of the IMC2023 training set, cyprus scene. I addressed this issue without training by just rotating images.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6388%2F212bfdaf476e1ec79efa5692162d99d2%2Fcyprus1.png?generation=1686630454984938&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6388%2F75ce985499c56086b721df6a3e63e5d1%2Fcyprus2.png?generation=1686630470817962&amp;alt=media" alt=""></p> <p>Specifically, 4 different rotated images were prepared and matched using the very lightweight model. 
All images were resized to the same size (840x840) and processed in one batch.</p> <h2>Parallel Execution</h2> <p>GPU-intensive and CPU-intensive tasks can be executed in parallel for efficiency. Specifically, COLMAP BA is a CPU-intensive task, but stereo matching is a GPU-intensive task. Therefore, I implemented COLMAP processing in a separate thread and run it in parallel with stereo matching.</p> <h2>Local Validation</h2> <p>Only some difficult scenes were verified in the local environment. In particular, dioscuri, which has 174 images, was used as a reference to adjust the algorithm so that it would not time out.</p> <table> <thead> <tr> <th>scene</th> <th># images</th> <th>score</th> <th>time (local env)</th> </tr> </thead> <tbody> <tr> <td>bike</td> <td>15</td> <td>0.9342</td> <td>134 s</td> </tr> <tr> <td>kyiv-puppet-theater</td> <td>27</td> <td>0.7781</td> <td>291 s</td> </tr> <tr> <td>cyprus</td> <td>30</td> <td>0.6239</td> <td>312 s</td> </tr> <tr> <td>wall</td> <td>43</td> <td>0.4753</td> <td>851 s</td> </tr> <tr> <td>dioscuri</td> <td>174</td> <td>0.8775</td> <td>2045 s</td> </tr> </tbody> </table> <h2>Processing time</h2> <p>My best submisison run time was <strong>7 hours and 40 minutes</strong>. This is well under the time limit of 9 hours. Therefore, I added LoFTR to ensemble on the last day, but the Kaggle server is broken and the notebook is still running. If the notebook had been successfully processed, My best submission could have been a little better.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6388%2F538925ab8b7ece79f2630f2a9f18708e%2Ferrors.png?generation=1686630579463016&amp;alt=media" alt=""></p> <p>The issue of not being able to submit on the last day is very stressful. Something similar occurred with Google Landmark Recognition in 2019. I pray that this kind of trouble will not occur again🙏</p> <h2>Thing that didn't work</h2> <p>Too many things to try to do alone :-)</p> <ul> <li>TTA … no significant improvement</li> <li>Pixel-Perfect SfM .. I got a slight improvement in my local environment but gave up because I couldn't set up ceres in my kaggle notebook</li> <li>TensorRT .. I had to downgrade from PyTorch 2.0, but the setup was successful. I can't say with much certainty, but at least my experiments did not improve execution speed.</li> <li>Half precision … It was about 10%(?) faster, but mAA was worse, so it was not adopted in the end.</li> <li>other matchers … Tried DKMv3, LoFTR, Silk, but finally used only SPSG and MatchFormer due to high mAA.</li> <li>OpenGLUE … Trained DISK+OpenGLUE and SuperPoint+OpenGLUE on the MegaDepth dataset but did not reach the scores of the SPSG pretrained model. Converted phototourist dataset to OpenGLUE format and built MegaDepth + phototourist dataset, but did not validate it because of lack of time.</li> <li>Incremental Mapper parameter tuning … contributed to the stable high score in the local environment. Specifically, I reduced the max refinement change of BA and increased the max num iteration. However, all the submissions to kaggle were out of memory, so I did not check the improvement on the leaderboard.</li> </ul>
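<p>To make the kNN shortlist + completion step concrete, here is a rough sketch under stated assumptions: the inlier counts come from the RANSAC-verified lightweight SPSG pass and the raw keypoint-match counts are used for the completion, while the names and the exact completion rule are illustrative.</p> <pre><code>import numpy as np

def build_shortlist(inliers, matches, k=10):
    """inliers, matches: (N, N) symmetric count matrices over the N scene images.
    Returns an undirected set of image pairs with at least k neighbours per image."""
    n = inliers.shape[0]
    shortlist = set()

    # k nearest neighbours per image, ranked by verified inlier count
    for i in range(n):
        order = [j for j in np.argsort(-inliers[i]) if j != i][:k]
        shortlist.update(tuple(sorted((i, j))) for j in order)

    # completion: images that still have fewer than k neighbours (e.g. nothing verified)
    # get extra pairs by raw keypoint-match count, so no image is left without candidates
    degree = {i: sum(1 for p in shortlist if i in p) for i in range(n)}
    for i in range(n):
        for j in np.argsort(-matches[i]):
            if degree[i] &gt;= k:
                break
            if j == i:
                continue
            pair = tuple(sorted((i, j)))
            if pair not in shortlist:
                shortlist.add(pair)
                degree[i] += 1
                degree[j] += 1
    return shortlist
</code></pre>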
Image Matching Challenge 2023
2nd Place Solution for the IMC 2023 – Win Over COLMAP Randomness?!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1><strong>Bonus Point</strong></h1> <p>We presented our solution at <a href="https://image-matching-workshop.github.io/" target="_blank">CVPR 2023</a>. Watch on YouTube <a href="https://youtu.be/9JpGjpITiDM?si=l7pGDPw4vOnZsH9H&amp;t=13519" target="_blank">here</a></p> <h1><strong>Intro</strong></h1> <p>Our team would like to deeply appreciate the Kaggle staff, Google Research, and Haiper for hosting the continuation of this exciting image matching challenge, as well as everyone here who compete and shared great discussions. My congratulations to all participants!</p> <p>The work we describe here is truly a joint effort of <a href="https://www.kaggle.com/yamsam" target="_blank">@yamsam</a>, <a href="https://www.kaggle.com/remekkinas" target="_blank">@remekkinas</a>, and <a href="https://www.kaggle.com/vostankovich" target="_blank">@vostankovich</a>. I’m grateful of being a part of this hardworking, cohesive, and skilled team. Thanks a lot, guys! I learned a lot from you.</p> <h6>We enjoyed this competition!</h6> <p>The best submission that was used for final scoring in private LB finished on the last day of the competition. We weren’t completely sure about this submission because it was not clear how much randomness was in it. We had a practice to re-run the same notebook code multiple times to see what scores we can get. We discussed the solution which was implemented on the last day, trusted it and it worked out. The other interesting fact is that our 2nd selected submission for final evaluation scored <strong>0.497/0.542</strong> that also allows us to take 2nd place. This selected second submission is the same as the 1st one except the “Run reconstruction multiple times” trick, that is described below. Anyway, the difference between the best submission (<strong>0.562</strong>) and the 2nd one is noticable. </p> <h1><strong>Overview</strong></h1> <p>In general, throughout our code submissions every time we fight with the randomness coming from COLMAP responsible for scene reconstruction. Our final solution is based on the use of COLMAP and pretrained SuperPoint/SuperGlue models running on different resolutions for every image in the scene. We apply a bunch of different tricks aimed at different parts of COLMAP-based pipeline in order to stabilize our solution and reach the final score.</p> <h1><strong>Architecture</strong></h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5065877%2F68911db7c4cc430dec05670cd196a960%2Fslide_architecture.png?generation=1687202186098466&amp;alt=media" alt="architecture"></p> <h1><strong>Key Takeaways:</strong></h1> <ul> <li>Initially, use all <strong>possible unique image pairs</strong> generated from the scene set. Remove the model and logic used for finding and ranking similar images in the scene. A threshold of 100 defines a minimum number of matches that we expect each image pair to have. 
If there are fewer, we discard that pair.</li> <li><strong>SP/SG</strong> settings: an unlimited number of keypoints, a keypoint threshold of <strong>0.005</strong>, a match threshold of <strong>0.2</strong>, and <strong>20</strong> Sinkhorn iterations.</li> <li><strong>Half precision</strong> for SP/SG helped to reduce occupied memory without a noticeable loss of accuracy. Another great performance trick is to cache the keypoints/descriptors generated by SP for each image, and then cache the SG matches for every image pair. This reduced the running time a lot.</li> <li><strong>TTA</strong>. Ensemble of matches extracted from images at different scales. In our local experiments, the best results are achieved with a combination of <strong>[1088, 1280, 1376]</strong>. We used np.concatenate to join matches from different models, which was pretty common in last year's IMC22 competition.</li> <li>Apply a <strong>rotation detector</strong> to un-rotate images in the scene if necessary. We discovered that some scenes in the train dataset (cyprus, dioscuri) have many 90/270-degree rotated images. Some of the images have EXIF meta information. Unfortunately, looking into the train dataset, we did not find any specifics about the orientation the image was captured at. To address this rotation issue, we re-rotate the image to its natural orientation by employing this <a href="https://github.com/ternaus/check_orientation" target="_blank">solution</a>. We use it without any threshold and look at the number of rotations that we need to apply to an image. After applying rotation, the score for the cyprus scene jumped up significantly from <strong>~0.02</strong> to <strong>~0.55</strong>. A RotNet implementation did not work out for us.</li> <li><strong>Set up the initial image</strong> for COLMAP reconstruction explicitly. For each image we store the number of pairs in which it is seen and the number of matches these pairs produce. Then we pick the one with the highest number of pairs. If multiple images satisfy this criterion, we pick the one with the highest number of matches. It helped to boost the score.</li> <li>To reduce randomness in the score, we decided to do something like <strong>averaging multiple match_exhaustive</strong> calls. The idea is to run match_exhaustive N times on the original database of matches. Then we keep only those matches that appear in 8/10 cases; other matches are discarded. It was done in a crude way with database copies, writes/reads, etc.</li> <li><strong>Run reconstruction multiple times</strong> from scratch with different match-count thresholds, e.g. <strong>[100, 125, 75, 100]</strong>. By looking at the number of registered images and the number of 3D cloud points, we select the best reconstruction (a sketch of this selection logic is given at the end of this post). This trick not only finds a better reconstruction by finding a better threshold for matches, but also decreases the randomness effect and acts as a countermeasure against a shake-up. Due to its running-time complexity, we used this strategy only for scenes having fewer than 40-45 images. This is the last step in our solution, and it helped us to boost the score from <strong>0.497/0.542</strong> to <strong>0.506/0.562</strong>. We also experimented with pycolmap.incremental_mapping employing a similar idea, but that scenario did not work out.</li> </ul> <h1><strong>Ideas that did not work out or not fully tested:</strong></h1> <p>• <strong>TTA multi crop</strong>, no success. 
The idea was to split the image into multiple crops and extract matches in order to find similar images in the scene and determine the best image pairs.<br> • <strong>Square-sized images.</strong><br> • <strong>Bigger image size</strong> (e.g., 1600) for SP/SG.<br> • <strong>Manual RANSAC</strong> instead of using COLMAP's internal implementation. We ran experiments with geometric verification disabled, but the score was not good.<br> • <strong>NMS filtering</strong> to reduce the number of points by using <a href="https://github.com/BAILOOL/ANMS-Codes" target="_blank">ANMS</a>. <br> • Filter the least significant image pairs by the number of outliers instead of relying on raw matches. It was quite important to require a certain number of matches for an image pair. We ran experiments using SP/SG and LoFTR. We got higher mAA with LoFTR, but more effort was probably needed to make it work properly, and we did not have enough time.<br> • <strong>Downscale scene images</strong> before passing them to COLMAP.<br> • <strong>Pixel-Perfect Structure-from-Motion</strong>. It was a promising method to evaluate, as we got a good boost locally with <a href="https://github.com/cvg/pixel-perfect-sfm" target="_blank">PixSfm</a>, using a single image size of 1280, and it boosted our score from <strong>0.71727</strong> to <strong>0.76253</strong>. Then we managed to install this framework successfully and run it in the Kaggle environment, but could not beat our best score at that moment. It is a heavyweight framework taking too much RAM, and we could run it only for scenes having at most ~30 images. We were a bit upset because we had spent tons of hours compiling all this stuff.<br> • <strong>Adaptive image sizes</strong>. Say, if the longest image side &gt;= 1536 for most images in the scene, we use a higher image resolution for the matchers ensemble, e.g. [1280, 1408, 1536]. Otherwise, the default one is applied [1280, 1088, 1376]. We did not have enough time to test this idea. It worked locally for cyprus and wall, which have big-resolution images. One of our last submissions implementing this idea crashed with an internal error.<br> • Different <strong>detectors, matchers</strong>. We tested LoFTR, QuadTreeAttention, AspanFormer, DKM v3, GlueStick (keypoints + lines), Patch2Pix, KeyNetAffNetHardNet, DISK, PatchNetVLAD. We also experimented with the confidence matching thresholds and the number of matches, but no boost here. Eventually, SP/SG was the best choice for us. Probably, the reason many dense methods did not work out for us is their low performance on the “repeatability” metric and the high noise in the matches.<br> • Different <strong>CNNs to find the most similar images</strong> in the scene and generate corresponding image pairs (NetVLAD, different pretrained timm-based backbones, CosPlace, etc.). We even specifically trained a model to find similar images in the scene, but no success here. Later, we gave up on this strategy altogether.<br> • Different <strong>keypoint/matching refinement</strong> methods (e.g., the recently published <a href="https://github.com/TencentYoutuResearch/AdaMatcher" target="_blank">AdaMatcher</a>, Patch2Pix), but we did not have enough time. 
AdaMatcher seems a promising idea to try, a quote from their paper “as a refinement network for SP/SG we observe a noticeable improvement in AUC”</p> <h1><strong>Performance Improvements Step By Step</strong></h1> <table> <thead> <tr> <th>Method</th> <th>Private LB</th> <th>Public LB</th> <th>Δ Private LB</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>0.382</td> <td>0.317</td> <td>–</td> </tr> <tr> <td>Manual RANSAC</td> <td>0.336</td> <td>0.286</td> <td>–0.046</td> </tr> <tr> <td>Image size 1024 → 1280</td> <td>0.407</td> <td>0.345</td> <td>+0.025</td> </tr> <tr> <td>Image size 1376, similarity=None</td> <td>0.489</td> <td>0.426</td> <td>+0.026</td> </tr> <tr> <td>Exhaustive matching, 8/10</td> <td>0.486</td> <td>0.441</td> <td>-0.003</td> </tr> <tr> <td>TTA, Image sizes [840, 1024, 1280]</td> <td>0.491</td> <td>0.447</td> <td>+0.002</td> </tr> <tr> <td>TTA, Image sizes [1088, 1280, 1376]</td> <td>0.523</td> <td>0.475</td> <td>+0.032</td> </tr> <tr> <td>Manual image initialization</td> <td>0.529</td> <td>0.492</td> <td>+0.006</td> </tr> <tr> <td>Rotate detection</td> <td>0.542</td> <td>0.497</td> <td>+0.013</td> </tr> <tr> <td>Multi-run reconstruction, matches thr [100, 125, 75, 100]</td> <td>0.562</td> <td>0.506</td> <td>+0.02</td> </tr> </tbody> </table> <h5>The final score is 0.506/0.562 in Public/Private.</h5> <h1><strong>Local Validation</strong></h1> <p>As a reference, this is one of our latest metric reports using train dataset: </p> <p>urban / kyiv-puppet-theater (26 images, 325 pairs) -&gt; mAA=0.921538, mAA_q=0.991077, mAA_t=0.921846<br> urban -&gt; mAA=0.921538</p> <p>heritage / cyprus (30 images, 435 pairs) -&gt; mAA=0.514713, mAA_q=0.525287, mAA_t=0.543678<br> heritage / wall (43 images, 903 pairs) -&gt; mAA=0.495792, mAA_q=0.875637, mAA_t=0.509302<br> heritage -&gt; mAA=0.505252</p> <p>haiper / bike (15 images, 105 pairs) -&gt; mAA=0.940952, mAA_q=0.999048, mAA_t=0.940952<br> haiper / chairs (16 images, 120 pairs) -&gt; mAA=0.834167, mAA_q=0.863333, mAA_t=0.839167<br> haiper / fountain (23 images, 253 pairs) -&gt; mAA=0.999605, mAA_q=1.000000, mAA_t=0.999605<br> haiper -&gt; mAA=0.924908</p> <p><strong>Final metric -&gt; mAA=0.783900</strong></p> <p>Finally, we had two submissions running on the last day. One of them succeeded and allowed us to get 2nd place, but the other one did not fit the time limit, unexpectedly. Sometimes it is stressful to make a submission on the last day of the competition.</p> <h1><strong>Helpful Resources</strong></h1> <p>Special thanks to the authors of the following projects:<br> • <a href="https://colmap.github.io/" target="_blank">COLMAP</a><br> • <a href="https://ieeexplore.ieee.org/document/7780814" target="_blank">SuperPoint</a><br> • <a href="https://arxiv.org/abs/1911.11763" target="_blank">SuperGlue</a><br> • <a href="https://github.com/ternaus/check_orientation" target="_blank">check_orientation</a></p>
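<p>As a companion to the “Run reconstruction multiple times” trick described above, here is a minimal sketch of selecting the best reconstruction across several runs. Assumptions: <code>build_database</code> is a hypothetical helper that writes a COLMAP database filtered at a given match-count threshold, and the <code>pycolmap.incremental_mapping</code> call mirrors the competition baseline (its exact signature may differ between pycolmap versions).</p> <pre><code>import pycolmap

def best_reconstruction(image_dir, output_dir, thresholds=(100, 125, 75, 100)):
    best, best_key = None, (-1, -1)
    for t in thresholds:
        database_path = build_database(min_matches=t)  # hypothetical helper
        maps = pycolmap.incremental_mapping(
            database_path=database_path, image_path=image_dir, output_path=output_dir
        )
        for rec in maps.values():
            # Rank candidates by number of registered images, then by 3D points.
            key = (len(rec.images), len(rec.points3D))
            if key &gt; best_key:
                best, best_key = rec, key
    return best
</code></pre>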
AMP®-Parkinson's Disease Progression Prediction
12th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: AMP®-Parkinson's Disease Progression Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Before anything else, I want to express my deep gratitude to my team members <a href="https://www.kaggle.com/vitalykudelya" target="_blank">@vitalykudelya</a>, <a href="https://www.kaggle.com/yukisawamura" target="_blank">@yukisawamura</a> and <a href="https://www.kaggle.com/salaryman" target="_blank">@salaryman</a>, who contributed their time, effort, and expertise to this competition. Next, we'd like to extend our thanks to the Kaggle community and the hosts. Thank you for providing a challenging dataset and a great learning opportunity.</p> <h1><strong>Overview of our solution</strong></h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2848721%2F279f932e866a8efdd2c539776f6436e6%2F2023-05-19%209.56.02.png?generation=1684460061564118&amp;alt=media" alt="Overview"></p> <p>Our solution is based on user grouping, trend calculation for each group, user group prediction (classification), and trend assignment for each group.<br> You can check our solution code <a href="https://www.kaggle.com/maruichi01/11th-place-solution" target="_blank">here</a>. It takes a very long time to be scored because my pipeline is not good 😓</p> <h2><strong>1. Grouping</strong></h2> <p>During our exploratory data analysis (EDA), we noticed significant differences in the target trends among users based on the presence or absence of Medication information (across all 'visit_month' data, regardless of being 'On' or 'Off'). This is the basis of our pipeline.</p> <h2><strong>2. Trend Calculation</strong></h2> <p>Based on the groups' information, we created three types of trends: those without Medication information, those with Medication information, and overall trends. These trends were adopted from the '<a href="https://www.kaggle.com/code/vitalykudelya/only-trends" target="_blank">Only_Trend</a>' and '<a href="https://www.kaggle.com/code/vitalykudelya/p05060-protein-npx-groups-trend-silver-medal" target="_blank">Protein Shift</a>' notebooks. In particular, the protein 'P05060' greatly contributed to improving the Public LB score.</p> <h2><strong>3. Feature Engineering + Group Classification</strong></h2> <p>We implemented binary classification using LightGBM to predict whether a user belongs to the group with Medication information or not. The prediction was made based on 'visit_month'-related information and the Protein and Peptide data.<br> For our final submission, to increase the robustness of the model, we utilized 10-fold cross-validation (CV) and random seed averaging.</p> <h2><strong>4. Using Binary Classification Results for Trend Mapping</strong></h2> <p>Based on the results of the binary classification, we mapped the trends of each group. For our final submission, if the prediction from the binary classification was above 0.75, we assumed the user belonged to the group with Medication information. If it was below 0.15, we assumed the user belonged to the group without Medication information. 
For all other cases, we used the overall trend.</p> <p>As the dataset for this competition was small, both CV and LB scores were unstable. We struggled until the end to decide which submission to choose. If you have any questions or feedback, please feel free to comment. We look forward to learning with all of you in the community.</p>
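<p>A minimal sketch of the trend-mapping step described above (the 0.75 / 0.15 thresholds come from the write-up; <code>trend_with_med</code>, <code>trend_without_med</code> and <code>trend_overall</code> are hypothetical lookup functions returning the per-group trend prediction for a given month offset and UPDRS target):</p> <pre><code>def map_trend(p_medication, plus_month, target):
    # Pick which group's trend to use based on the classifier's probability.
    if p_medication &gt; 0.75:    # confidently in the group with medication info
        return trend_with_med(plus_month, target)
    if p_medication &lt; 0.15:    # confidently in the group without medication info
        return trend_without_med(plus_month, target)
    return trend_overall(plus_month, target)  # otherwise fall back to the overall trend
</code></pre>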
Image Matching Challenge 2023
39th place solution - SuperGlue + SIFT
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Overview</h2> <p>We used SuperGlue or SIFT on different scenes based on a heuristic and you can see our scores below.</p> <p>Notebook: <a href="https://www.kaggle.com/code/gunesevitan/image-matching-challenge-2023-inference" target="_blank">https://www.kaggle.com/code/gunesevitan/image-matching-challenge-2023-inference</a><br> Code: <a href="https://github.com/gunesevitan/image-matching-challenge-2023" target="_blank">https://github.com/gunesevitan/image-matching-challenge-2023</a></p> <h3>Scene Scores</h3> <table> <thead> <tr> <th></th> <th>mAA</th> <th>mAA Rotation</th> <th>mAA Translation</th> </tr> </thead> <tbody> <tr> <td>bike</td> <td>0.9228</td> <td>0.9904</td> <td>0.9228</td> </tr> <tr> <td>chairs</td> <td>0.9775</td> <td>0.9916</td> <td>0.9775</td> </tr> <tr> <td>fountain</td> <td>1.0</td> <td>1.0</td> <td>1.0</td> </tr> <tr> <td>dioscuri</td> <td>0.5062</td> <td>0.5220</td> <td>0.5236</td> </tr> <tr> <td>cyprus</td> <td>0.6523</td> <td>0.7887</td> <td>0.6586</td> </tr> <tr> <td>wall</td> <td>0.8150</td> <td>0.9359</td> <td>0.8317</td> </tr> <tr> <td>kyiv-puppet-theater</td> <td>0.7704</td> <td>0.8756</td> <td>0.7895</td> </tr> </tbody> </table> <h3>Dataset Scores</h3> <table> <thead> <tr> <th></th> <th>mAA</th> <th>mAA Rotation</th> <th>mAA Translation</th> </tr> </thead> <tbody> <tr> <td>haiper</td> <td>0.9667</td> <td>0.9994</td> <td>0.9667</td> </tr> <tr> <td>heritage</td> <td>0.6578</td> <td>0.7489</td> <td>0.6713</td> </tr> <tr> <td>urban</td> <td>0.7704</td> <td>0.8756</td> <td>0.7895</td> </tr> </tbody> </table> <h3>Global Scores</h3> <table> <thead> <tr> <th>mAA</th> <th>mAA Rotation</th> <th>mAA Translation</th> </tr> </thead> <tbody> <tr> <td>0.7983</td> <td>0.8746</td> <td>0.8092</td> </tr> </tbody> </table> <h3>LB Scores</h3> <p>Public LB Score: <strong>0.415</strong><br> Private LB Score: <strong>0.465</strong></p> <h2>SuperPoint &amp; SuperGlue</h2> <p>SuperPoint and SuperGlue models are used with almost default parameters except <code>keypoint_threshold</code> is set to 0.01. We found that SuperGlue works better with raw sizes but some of the scenes had very large images that didn't fit into GPU memory. We resized images to 2560 (maximum longest edge that can be used safely on Kaggle) longest edge if any of the edges exceed that number. Otherwise, raw sizes are used.</p> <h2>SIFT</h2> <p>We initially started with COLMAP's SIFT implementation and it was working pretty good as a baseline. It was performing better on some scenes with very large images and strong rotations compared to deep models. 
There was a score trade-off between cyprus and wall when enabling <code>estimate_affine_shape</code> and <code>upright</code>, and we ended up disabling both of them.</p> <pre><code>sift_extraction_options.max_image_size =
sift_extraction_options.max_num_features =
sift_extraction_options.estimate_affine_shape =
sift_extraction_options.upright =
sift_extraction_options.normalization =
</code></pre> <h2>Model Selection</h2> <p>We noticed that large images with EXIF metadata have very high memory consumption, and those are the images that have 90-degree rotations because of DSLR camera orientation. We added a simple if block that checked the mean memory consumption of each scene. If that was greater than 16 megabytes, we used SIFT. Otherwise, we used SuperGlue on that scene.</p> <h2>Incremental Mapper</h2> <p>We used COLMAP's incremental mapper for reconstruction with almost default parameters except <code>min_model_size</code>, which is set to 3. The best reconstruction is selected based on registered image count, and unregistered images are filled with the scene's mean rotation matrix and translation vector.</p> <h2>Things that didn't work</h2> <ul> <li>OpenCV SIFT (COLMAP's SIFT implementation was working way better for some reason)</li> <li>DSP-SIFT (domain size pooling was boosting my validation score locally but it was throwing an error on Kaggle)</li> <li>LoFTR (too slow)</li> <li>KeyNet, AffNet, HardNet (bad score)</li> <li>DISK (bad score)</li> <li>SiLK (too slow)</li> <li>ASpanFormer (too slow)</li> <li>Rotation Correction (we probably didn't implement it correctly within the limited timeframe)</li> <li>Two-stage matching (it was boosting my validation score but I didn't have enough time to fit it into the pipeline)</li> </ul>
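<p>A minimal sketch of the scene-level model-selection heuristic described above. Assumptions on my part: the “memory consumption” check is implemented as the mean image file size on disk, and <code>run_sift</code> / <code>run_superglue</code> are hypothetical stand-ins for the two matching pipelines.</p> <pre><code>import os
from glob import glob

MEAN_SIZE_THRESHOLD = 16 * 1024 * 1024  # 16 megabytes, as in the write-up

def match_scene(image_dir):
    paths = glob(os.path.join(image_dir, "*"))
    mean_size = sum(os.path.getsize(p) for p in paths) / max(len(paths), 1)
    if mean_size &gt; MEAN_SIZE_THRESHOLD:
        # Large (often rotated DSLR) images: fall back to COLMAP's SIFT.
        return run_sift(paths)
    # Otherwise use SuperPoint + SuperGlue.
    return run_superglue(paths)
</code></pre>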
Image Matching Challenge 2023
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>(June 21st: Added additional explanation.)</p> <p>Firstly, I'd like to express my gratitude to the hosts and Kaggle staff for conducting the IMC 2023 competition. The task was both exciting and challenging, which made it a pleasure to engage with over the two months.</p> <h2>Overview</h2> <p>SuperPoint/SuperGlue proved to be exceptionally accurate and quick.<br> My code is partly a combination of the baseline provided by the host and <a href="https://www.kaggle.com/code/chankhavu/loftr-superglue-dkm-with-inspiration" target="_blank">the notebook by </a><a href="https://www.kaggle.com/chankhavu" target="_blank">@chankhavu</a> at IMC 2022. I extend my appreciation to both for providing the codes.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9249230%2F1036693590b6655d8d48d588bfd69e9c%2FIMC-solution.png?generation=1686658721858306&amp;alt=media" alt=""></p> <h2>Main pipeline</h2> <h3>Screening process based on the number of matches</h3> <p>Considering the large number of image combinations, an effective screening method was necessary. I noticed that the number of matches made by SG is significantly low (&lt;10) when an image pair is unsuitable for stereo matching. Consequently, I decided to bypass the process if the number of matches achieved by SG (longside = 1200) fell below a certain threshold (in this case, 30 matches). This strategy significantly reduced processing time, allowing for more pair trials within the given timeframe, leading to a noticeable improvement (LB: +0.08).</p> <h3>Rotation during the screening process</h3> <p>Procuring meaningful matches from pairs with unsorted image orientations, such as those found in Cyprus, proved to be challenging. Therefore, I incorporated a rotation process into the screening procedure, resulting in further improvement (LB: +0.04).</p> <h3>Image splitting</h3> <p>Each image was divided into four sections, each generating its own set of keypoints, followed by the execution of matchings across all pair combinations (4x4 = 16 pairs) with a batched process for SP/SG (longside = 1400). 
<br> With image splitting, the number of matches increased to almost 3 times that of the single full image, as shown below, and this holds regardless of the original image size.<br> Just increasing the input size of an image cannot achieve this.<br> I obtained similar benefits in the other scenes.<br> This method proved to be more effective and time-efficient than traditional TTA in my case (LB: +0.01~0.02).<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9249230%2F59ee7b75b96f413775ba3f8f7e6fc6c0%2Fsplit_2.png?generation=1687299587316264&amp;alt=media" alt=""></p> <h3>Ensembling with DKM</h3> <p>After comparing various models, DKM v3 emerged as a relatively lightweight and effective choice when used in conjunction with SG (LB: +0.01~0.04).<br> SuperGlue could not create correct matches on the stairs in the image pair below (see the yellow region), while it provided good matches on the other objects such as the arch and pillar.<br> On the other hand, DKM could detect the correct corresponding points on the stairs, which suggests that these matchers are complementary to each other.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9249230%2F4a040a43881adc0532ccc0e5b4878c17%2FDKM_2.png?generation=1687299538792519&amp;alt=media" alt=""></p> <h3>Parallel execution of separate matching and mapping processes</h3> <p>Both matching and mapping/reconstruction are time-intensive tasks. However, the former utilizes both the GPU and a single CPU, while the latter only requires CPU resources. Therefore, implementing parallel processing using the queue library improved time efficiency by approximately 20~30%. This concept was inspired by a gold-prize solution at IMC 2022. <br> Remember to set <code>mapper_options.num_threads = 1</code>, which can also help avoid OOM during reconstruction.</p> <h2>The final score:</h2> <p>urban / kyiv-puppet-theater (26 images, 325 pairs) -&gt; mAA=0.921538, mAA_q=0.987385, mAA_t=0.921538<br> urban -&gt; mAA=0.921538</p> <p>heritage / dioscuri (174 images, 15051 pairs) -&gt; mAA=0.594950, mAA_q=0.689662, mAA_t=0.602279<br> heritage / cyprus (30 images, 435 pairs) -&gt; mAA=0.706437, mAA_q=0.727126, mAA_t=0.724828<br> heritage / wall (43 images, 903 pairs) -&gt; mAA=0.805980, mAA_q=0.935105, mAA_t=0.824917<br> heritage -&gt; mAA=0.702456</p> <p>haiper / bike (15 images, 105 pairs) -&gt; mAA=0.933333, mAA_q=0.999048, mAA_t=0.933333<br> haiper / chairs (16 images, 120 pairs) -&gt; mAA=0.981667, mAA_q=0.999167, mAA_t=0.981667<br> haiper / fountain (23 images, 253 pairs) -&gt; mAA=0.999605, mAA_q=1.000000, mAA_t=0.999605<br> haiper -&gt; mAA=0.971535</p> <p><strong>Final metric -&gt; mAA=0.865176</strong></p> <p><strong>Public LB: 0.471</strong><br> <strong>Private LB: 0.534</strong></p> <p>It should be noted that the submission with the highest local and public scores also resulted in the best private score among my submissions. While I struggled with the randomness of Colmap, I now recognize that the dataset was useful and served as a valuable reference in aiming for the correct goal.</p> <h2>Ideas that did not work well</h2> <ul> <li>Other models such as LoFTR, SE2-LoFTR, DISK, QuadTreeAttention, OpenGlue, and SILK were tested. It was revealed that the combination of SP/SG and DKM consistently outperformed them in terms of both speed and performance. 
(The differences in local mAA scores have been provided in the comment section.)</li> <li>Employing USAC_MAGSAC prior to reconstruction occasionally shortened the reconstruction duration, but the effect was minor with the parallel execution, where the matching process is rate-determining. Also, it never improved the score in my case.</li> <li>Implementing CLAHE rendered my pipeline unstable and less robust. While it proved effective in some scenes, it often deteriorated the accuracy in others, overall often leading to a decrease in score.</li> <li>Other forms of TTA (resolution, flip, 10deg-rotation, crop) provided only minimal improvement while consuming significant time. It appeared more beneficial to experiment with numerous pairs than to utilize TTA.</li> <li>I attempted to determine the R and T for each image that could not be registered with Colmap, employing the same methodology as IMC2022. However, this approach failed to improve the score. The reason seems straightforward: with either method, if matches cannot be identified, there is little that can be done. (On the other hand, adding the R and T values obtained from reconstructions other than best_idx to the submission also slightly improved the score.)</li> </ul>
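<p>A minimal sketch of the match-count screening described in the main pipeline above. Here <code>superglue_match</code> is a hypothetical helper that runs SP/SG on a pair resized to the given long side and returns the matched keypoints; the resolution and threshold are the values quoted in the write-up.</p> <pre><code>MIN_MATCHES = 30  # pairs below this are considered unsuitable for stereo matching

def select_pairs(images):
    kept = []
    for i in range(len(images)):
        for j in range(i + 1, len(images)):
            # Cheap screening pass at a reduced resolution (long side = 1200).
            matches = superglue_match(images[i], images[j], longside=1200)
            if len(matches) &gt;= MIN_MATCHES:
                kept.append((i, j))
    return kept
</code></pre>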
Vesuvius Challenge - Ink Detection
13th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone! Here is a write-up of my best private leaderboard solution (top 13 with a 0.641814 score, 0.723345 on the public LB). </p> <p>I am very grateful to the organizers and participants of this competition; it was very interesting and challenging. I've learned a lot about segmentation and had a lot of fun! Some of the ideas I have used in the solution came from the code / discussion tabs on Kaggle; hopefully I have mentioned them all.</p> <p>If you have any questions, please do not hesitate to reach me in the comments.</p> <h1>Overview</h1> <p>Data:</p> <ul> <li>Z shift and scale pre-processing based on fitting each mean layer intensity curve to fragment 3's mean intensity curve (inspired by <a href="https://www.kaggle.com/code/ajland/eda-a-slice-by-slice-analysis" target="_blank">this</a> notebook)</li> <li>5-fold training (fragments 1, 2a, 2b, 2c, 3; the 2nd fragment is split by scroll mask area: top part + 2 bottom parts)</li> <li>weighted sampling from fragments according to scroll mask area</li> <li>patch size is 384; on test, the overlap is 192 via spline-weighted averaging (inspired by <a href="https://github.com/bnsreenu/python_for_microscopists/blob/master/229_smooth_predictions_by_blending_patches/smooth_tiled_predictions.py" target="_blank">this</a> approach)</li> <li>24 slices with indices in [20, 44), random crop of 18 of them; each 3-slice window is fed to the model and the 6 resulting logits are aggregated via a linear layer</li> </ul> <p>Training:</p> <ul> <li>model is an SMP Unet with a 2D pre-trained encoder + linear layer aggregation</li> <li>random crop inside the scroll mask + standard vision augmentations</li> <li>BCE loss with 0.5 weight for positive examples, selection by F0.5 score on the validation fold</li> <li>64-epoch training, epoch length is ~ the number of patches in the train set (it is not constant because of random cropping)</li> <li>LR is 1e-4, no layer freezing / LR decay, schedule is cosine annealing with warmup (1e-1, 1, 1e-2 factors, 10% warmup), optimizer is AdamW</li> </ul> <p>Inference:</p> <ul> <li>TTA: cartesian product of 4 rotations (0, 90, 180, 270) and 3 flips (no flip, horizontal, vertical) -&gt; 12 predictions per volume (brought this idea from <a href="https://www.kaggle.com/code/yoyobar/2-5d-segmentaion-model-with-rotate-tta" target="_blank">here</a>)</li> <li>TTA probabilities are averaged and written to an image file, then the 5 models' images are averaged</li> </ul> <p>Environment &amp; tools:</p> <ul> <li>docker, pytorch lightning CLI, wandb</li> <li>1 x 3090, 1 x 3080Ti mobile</li> </ul> <h1>Some details</h1> <h2>Data</h2> <p>Inspired by <a href="https://www.kaggle.com/code/ajland/eda-a-slice-by-slice-analysis" target="_blank">this</a> analysis, I plot the mean intensity of each layer of each fragment for the full image, only for ink, and only for background.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2F011cb02384ff04722ab4cdd093d4a888%2Fintencity_curve.png?generation=1686855075215824&amp;alt=media" alt="Mean intensity of each layer of each fragment"></p> <p>Here it can be observed that fragments 1 and 3 are more or less similar in terms of 
both intensity values and minimum and maximum positions, while fragment 2 is different. </p> <p>My physical intuition behind this is the following: the papyrus does not seem to be aligned in depth inside the 3D volume; moreover, the papyrus thickness seems to vary between fragments 1, 3 and fragment 2. </p> <p>So, I've decided to align and stretch / compress the papyrus in the z dimension by a shift and scale operation:</p> <ol> <li>Split the volume into overlapping patches in the H and W dimensions</li> <li>For each patch </li> </ol> <ul> <li>calculate the mean intensity of each layer</li> <li>fit the <code>z_shift</code> parameter to minimize the following loss (<code>y_target</code> here is the intensity curve of the full fragment 3)</li> </ul> <pre><code>z_shifted = z + z_shift
f = interpolate.interp1d(z_shifted, y)
return f(z_target) - y_target
</code></pre> <ol start="3"> <li>For the full volume, fit a single <code>z_scale</code> (assuming the papyrus thickness is the same within a fragment) in a similar manner.</li> <li>Linearly upsample the resulting <code>z_shift</code> and <code>z_scale</code> maps to the full volume size and save them on disk.</li> <li>Apply <code>z_shift</code> and <code>z_scale</code> to the full volume via <code>scipy.ndimage.geometric_transform</code> with the transform written as a C module for speed.</li> <li>For training, save the result on disk; for inference, apply it on the fly.</li> </ol> <p>The resulting <code>z_shift</code> maps and <code>z_scale</code> values are the following:</p> <table> <thead> <tr> <th>Fragment 1</th> <th>Fragment 2</th> <th>Fragment 3</th> </tr> </thead> <tbody> <tr> <td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2F387734e4ccdf846560e4f61421ffb52c%2F1_z_shift.png?generation=1686854266142975&amp;alt=media" alt="Z shift map and its histogram"></td> <td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2F4a66d91e17b7a2332b5574defb0d1b37%2F2_z_shift.png?generation=1686854298329163&amp;alt=media" alt="Z shift map and its histogram"></td> <td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2F3510608c738e619389b036e2d1d871ef%2F3_z_shift.png?generation=1686854366312885&amp;alt=media" alt="Z shift map and its histogram"></td> </tr> </tbody> </table> <p>Z scale is ~1.0 for fragments 1 and 3 and ~0.6 for fragment 2, which could indicate that fragment 2 is roughly twice as thick as the others.</p> <p>Such an operation should be beneficial for 2D approaches. It seems to improve CV by ~0.03-0.05 F05, but I did not run a full comparison due to time constraints.</p> <p>Also, I've tried to fit / calculate a normalization for each fragment separately to completely fit the intensity curve, but it did not work well.</p> <h2>Model</h2> <p>A pre-trained maxvit encoder + SMP 2D Unet with aggregation has shown the best CV score, so <code>maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k</code> is the one used in the best solution. </p> <p>The 2D Unet with aggregation works as follows: the Unet model is applied to each slice of the input with size = step = 3 along the z dimension (6 slices in total), then the 6 resulting logits are aggregated via a linear layer.</p> <p>Using pre-trained models yields far better results than random initialization. 
Unfortunately, I did not come up with a better architecture that can handle 3D inputs and 2D outputs while still allowing the use of pre-trained models.</p> <p>The best CV scores of this model and the corresponding predictions are the following:</p> <table> <thead> <tr> <th>Fragment</th> <th>F05</th> <th>Prediction</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>0.6544</td> <td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2Fc9e6f42f6cf07b913a23b6fd989076d0%2F1_3jzUWpdZJR.png?generation=1686854422630598&amp;alt=media" alt="Fragment 1 probabilities"></td> </tr> <tr> <td>2a</td> <td>0.6643</td> <td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2Feeef4afaf95c79def86b96014d5a7b5b%2F2a_jP08265cbJ.png?generation=1686854447976196&amp;alt=media" alt="Fragment 2a probabilities"></td> </tr> <tr> <td>2b</td> <td>0.7489</td> <td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2Fe70babdbeb33a34a0976195453bf995d%2F2b_whqg8ITAI7.png?generation=1686854469983013&amp;alt=media" alt="Fragment 2b probabilities"></td> </tr> <tr> <td>2c</td> <td>0.6583</td> <td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2Fff2531343b00439ebdb81f310894b61b%2F2c_Aomw3d44sk.png?generation=1686854491195589&amp;alt=media" alt="Fragment 2c probabilities"></td> </tr> <tr> <td>3</td> <td>0.7027</td> <td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2F626278b3fa70ceb963b517c5069f1d3f%2F3_AM8P9eim4r.png?generation=1686854513349728&amp;alt=media" alt="Fragment 3 probabilities"></td> </tr> </tbody> </table> <p><strong>Tried, but did not work well</strong></p> <p>Multiple backbone models:</p> <ul> <li>convnext v2</li> <li>swin transformer v2</li> <li>caformer</li> <li>eva02</li> <li>maxvit </li> </ul> <p>And multiple full-model approaches: full 3D with aggregation on the last layer, 2.5D, 2D with aggregation, custom 3D models with pre-trained weights converted from 2D to 3D.</p> <h2>Transforms</h2> <p>The following augmentations are used for training:</p> <pre><code>train_transform = A.Compose(
    [
        RandomCropVolumeInside2dMask(
            =self.hparams.img_size,
            =self.hparams.img_size_z,
            scale=(0.5, 2.0),
            ratio=(0.9, 1.1),
            scale_z=(1 / self.hparams.z_scale_limit, self.hparams.z_scale_limit),
            =,
            =0,
        ),
        A.Rotate(
            =0.5,
            =rotate_limit_degrees_xy,
            =,
        ),
        ResizeVolume(
            =self.hparams.img_size,
            =self.hparams.img_size,
            =self.hparams.img_size_z,
            =,
        ),
        A.HorizontalFlip(=0.5),
        A.VerticalFlip(=0.5),
        A.RandomRotate90(=0.5),
        A.RandomBrightnessContrast(=0.5, =0.1, =0.1),
        A.OneOf(
            [
                A.GaussNoise(var_limit=[10, 50]),
                A.GaussianBlur(),
                A.MotionBlur(),
            ],
            =0.4
        ),
        A.GridDistortion(=5, =0.3, =0.5),
        A.CoarseDropout(
            =1,
            =int(self.hparams.img_size * 0.3),
            =int(self.hparams.img_size * 0.3),
            =0,
            =0.5
        ),
        A.Normalize(
            =MAX_PIXEL_VALUE,
            =self.train_volume_mean,
            =self.train_volume_std,
            =,
        ),
        ToTensorV2(),
        ToCHWD(=),
    ],
)
</code></pre> <p>Here, <code>RandomCropVolumeInside2dMask</code> crops a random 3D patch from the volume so that its center's (x, y) lies inside the mask, <code>ResizeVolume</code> is a trilinear 3D resize and <code>ToCHWD</code> simply permutes dimensions. 
<code>img_size</code> = 384, <code>img_size_z</code> = 18, <code>z_scale_limit</code> = 1.33, <code>rotate_limit_degrees_xy</code> = 45, and <code>train_volume_mean</code> and <code>train_volume_std</code> are the average of the ImageNet stats.</p> <p>For inference, the following transforms are used:</p> <pre><code>test_transform = A.Compose(
    [
        CenterCropVolume(
            =None,
            =None,
            =math.ceil(self.hparams.img_size_z * self.hparams.z_scale_limit),
            =,
            =,
            =1.0,
        ),
        ResizeVolume(
            =self.hparams.img_size,
            =self.hparams.img_size,
            =self.hparams.img_size_z,
            =,
        ),
        A.Normalize(
            =MAX_PIXEL_VALUE,
            =self.train_volume_mean,
            =self.train_volume_std,
            =,
        ),
        ToTensorV2(),
        ToCHWD(=),
    ],
)
</code></pre> <p>Here, <code>CenterCropVolume</code> is a center crop of the 3D volume.</p> <p>Mixing approaches (cutmix, mixup, custom stuff like copy-pasting the positive-class voxels) were also tried but showed contradictory results, so they were not included.</p> <p>The cartesian product of <code>no flip / H flip / V flip</code> and <code>no 90 rotation / 90 rotation / 180 rotation / 270 rotation</code> is used for TTA, yielding 12 predictions in total.</p> <p><strong>Update</strong>: the project source code for training can be found <a href="https://github.com/mkotyushev/scrolls" target="_blank">here</a></p>
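<p>A minimal sketch of the rotation/flip TTA described above, assuming a hypothetical <code>model</code> that maps an input batch of shape (N, C, H, W) to per-pixel probabilities of the same spatial size:</p> <pre><code>import torch

def tta_predict(model, x):
    # Average predictions over 4 rotations x 3 flip modes = 12 views.
    preds = []
    for k in range(4):                    # 0, 90, 180, 270 degree rotations
        for flip_dim in (None, -1, -2):   # no flip, horizontal, vertical
            xt = torch.rot90(x, k, dims=(-2, -1))
            if flip_dim is not None:
                xt = torch.flip(xt, dims=(flip_dim,))
            y = model(xt)
            # Undo the transforms in reverse order to map back to the original frame.
            if flip_dim is not None:
                y = torch.flip(y, dims=(flip_dim,))
            y = torch.rot90(y, -k, dims=(-2, -1))
            preds.append(y)
    return torch.stack(preds).mean(dim=0)
</code></pre>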
AMP®-Parkinson's Disease Progression Prediction
#5: Find the control group
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: AMP®-Parkinson's Disease Progression Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>When I selected my two final submissions a few hours ago, I decided for</p> <ul> <li>one model which uses only the month data for the predictions (no peptides and proteins) and has a public lb score of 54.7.</li> <li>another model, which additionally uses the peptides and proteins, has a better cv but a bad public lb score (55.3).</li> </ul> <p>It turned out that the public leaderboard was the better indicator than the cv, and the peptide/protein feature engineering was useless.</p> <h1>Recognizing the control group</h1> <p>If we plot the median updrs scores for every month, we see that the months which are multiples of 12 (the cyan markers on the gridlines) are usually lower than the non-multiples of 12 (the magenta markers between the gridlines). This cannot be a coincidence.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7917824%2Fac8539db5efe8bbbc2136e8e8bd6a240%2Fb1.png?generation=1684456007327684&amp;alt=media" alt="b1"></p> <p>A scatterplot of the 248 patients versus the months of their updrs assessments reveals that there are three groups of patients:</p> <pre><code>The patients of the green group had their first visits in months 0, 3, 6, 9, 12. The patients of the orange group had their first visits in months 0, 6, 12, 18, 24 and the last visit in month 60. The patients of the red group had their first visits in months 0, 12, 24. </code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7917824%2F6be1bdd4328f7ddfe3cadabc12ea7609%2Fb2.png?generation=1684456024639136&amp;alt=media" alt="b2"></p> <p>If we plot the updrs scores over time of every patient, we see differences among the groups. The red group in particular has the lowest updrs scores, which means that these are the healthiest people, and updrs_4 has rarely been measured for them.</p> <p>We can hypothesize that the red group is the control group (a group of people without Parkinson's disease), and the experimenters decided to test the control group only once a year and to skip the updrs_4 test for this group. The real patients (green and orange groups) were tested more often and with all four updrs tests.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7917824%2F1fe6d7bdde18dc308ad5ad6b27cab16b%2Fb3.png?generation=1684456038645918&amp;alt=media" alt="b3"></p> <p>Conclusion: We can distinguish the control group from the real patients according to their first non-zero visit_month: If the first non-zero visit_month is &lt;12, we have a real patient; if the first non-zero visit_month equals 12, the person belongs to the healthy control group. 
This distinction has high predictive value for the updrs scores.</p> <h1>The model</h1> <p>The model has only two features:</p> <ul> <li>the group to which a patient belongs</li> <li>the month of the prediction</li> </ul> <p>Depending on the group, it predicts a linear or isotonic regression:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7917824%2Fd26b9a640de7558a2cfc229abb21c567%2Fb4.png?generation=1684456323453532&amp;alt=media" alt="b4"></p> <h1>Lessons learned</h1> <ul> <li>A thorough EDA is important, and the EDA must be adapted to the dataset. Automated EDA tools don't find the hidden information.</li> <li>Unusual metrics (smape plus one) require unusual methods.</li> <li>If the training dataset is small, simple models turn out best.</li> <li>Medical data is scarce and expensive. If we haven't been able to prove a connection between proteins and Parkinson symptoms, this doesn't mean there is none. It only means that another thousand patients must be convinced to participate in a five-year study, and we might see a follow-up competition in 2028…</li> <li>In biology and medicine, we usually search for very weak effects: Protein measurements are imprecise and updrs scores depend on the mood of the patient and the doctor. If anybody was expecting SMAPE scores far below 50, this expectation was unrealistic.</li> </ul> <p>Source code is <a href="https://www.kaggle.com/code/ambrosm/pdpp-linear-and-isotonic-groups" target="_blank">here</a>.</p>
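<p>A minimal sketch of the grouping rule derived above, assuming a <code>clinical</code> DataFrame with <code>patient_id</code> and <code>visit_month</code> columns as in the competition data:</p> <pre><code>import pandas as pd

def assign_groups(clinical: pd.DataFrame) -&gt; pd.Series:
    # Month of each patient's first non-zero visit.
    nonzero = clinical[clinical["visit_month"] &gt; 0]
    first_visit = nonzero.groupby("patient_id")["visit_month"].min()

    def group(month):
        if month &lt; 12:
            return "patient"   # real patient: first non-zero visit before month 12
        if month == 12:
            return "control"   # healthy control group: first non-zero visit at month 12
        return "unknown"

    return first_visit.apply(group)
</code></pre>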
Image Matching Challenge 2023
16th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, we would like to say thank you to the organizers and Kaggle staff for setting up this challenge; it has been an amazing experience for us. </p> <p><strong>Our solution</strong><br> For our final submission, we used an ensemble of SuperPoint, KeyNet/AffNet/HardNet and SOSNet as our feature detectors/descriptors, and used SuperGlue and Adalam for feature matching. We used Colmap for reconstruction and camera localization. We also used <a href="https://github.com/cvg/Hierarchical-Localization" target="_blank">hloc</a> to speed up the pipeline and make it more scalable for validation and testing. </p> <p><strong>Image Retrieval</strong><br> We used <a href="https://openaccess.thecvf.com/content_cvpr_2016/papers/Arandjelovic_NetVLAD_CNN_Architecture_CVPR_2016_paper.pdf" target="_blank">NetVLAD</a> as implemented in <a href="https://github.com/cvg/Hierarchical-Localization" target="_blank">hloc</a> as the global feature descriptor for image retrieval.</p> <p><strong>Feature Matching</strong><br> The first thing we noticed in the dataset was that some scenes contain a lot of rotated images, and we tried to tackle this problem with 2 approaches: <br> 1) use rotation-invariant feature matchers (e.g., KeyNet/AffNet/HardNet, SOSNet); <br> 2) use a <a href="https://github.com/ternaus/check_orientation" target="_blank">lightweight orientation detector</a> to detect the rotation angles and rotate the image pairs accordingly so that both images have a similar orientation (for simplicity, we only set the rotation angles to 90, 180 and 270 degrees). </p> <p>We proceeded with both approaches and found that they achieve a similar improvement on the Heritage dataset; however, by ensembling more feature matchers we observed some extra improvements on the Urban and Haiper datasets, so we finally took this approach, and this ensemble achieved the best results for us within the time limit of 9h: <strong>SuperGlue + KeyNet/AffNet/HardNet (with Adalam) + SOSNet (with Adalam)</strong>. Using orientation compensation on top of the ensembled model did not bring any extra improvement. </p> <p><strong>Things that did not work</strong>:<br> 1) We first tried <a href="https://github.com/pidahbus/deep-image-orientation-angle-detection" target="_blank">this SOTA orientation detector</a>; however, it consumes too much memory and could not be integrated into our pipeline on Kaggle.<br> 2) We found <strong>KeyNet/AffNet/HardNet + Adalam</strong> from the baseline to be the <strong>best single feature matcher</strong> without any preprocessing: we could achieve 0.455/0.433 (equal to 47th place) by only tuning its parameters and resizing the input images to 1600. However, when we integrated it into our pipeline using hloc, its performance dropped significantly to 0.334/0.277 (locally as well, mainly on urban); we tried to investigate but still do not know why. <br> 3) We experimented with a lot of recent feature matchers and ensembles, including DKMv3, DISK, LoFTR, SiLK, DAC, and they either do not perform as well or are too slow when integrated into the pipeline. 
In general, we found that end-to-end dense matchers are not well suited for this multiview challenge despite their success in last year's two-view challenge: their speed is too slow and the scores they achieve are also not as good. Here are some local validation results:</p> <ol> <li>SiLK (on ~800x600):<br> urban: 0.125<br> haiper: 0.165</li> <li>DKMv3 (on ~800x600, and it's still very slow):<br> heritage: 0.185<br> haiper: 0.510</li> <li>DISK (on ~1600x1200):<br> urban: 0.461<br> heritage: 0.292 (0.452 with rotation compensation)<br> haiper: 0.433</li> <li>SOSNet with Adalam (on ~1600x1200):<br> urban: 0.031<br> heritage: 0.460 (same with rotation compensation)<br> haiper: 0.653</li> <li>Sift / Rootsift with Adalam (on ~1600x1200):<br> urban: 0.02<br> heritage: 0.396<br> haiper: 0.635</li> <li>DAC: the results are very bad</li> </ol> <p><strong>Reconstruction</strong><br> After merging all the match points from the ensemble, we apply <a href="https://github.com/colmap/pycolmap/blob/743a4ac305183f96d2a4cfce7c7f6418b31b8598/pipeline/match_features.cc#L76" target="_blank">geometric verification</a> in Colmap before reconstructing the model, which speeds up the reconstruction. <br> <strong>Things that did not work</strong>: <br> 1) We tried using Pixel-Perfect SfM; we set it up locally and it gave decent results, visually comparable to our pipeline, but since we could not get it up and running on Kaggle we did not proceed further. <br> 2) We tried using MAGSAC++ to replace the default RANSAC function Colmap uses to remove bad matching points before reconstructing the model, but we did not see a significant difference in the final scores. </p>
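<p>A generic sketch of how matches from several matchers can be merged into one set before Colmap's geometric verification (an illustration of the idea, not the team's exact code); it assumes each matcher returns per-image keypoint arrays plus an array of index pairs:</p> <pre><code>import numpy as np

def merge_matchers(results):
    # results: list of (kpts_a, kpts_b, matches) tuples from different matchers,
    # where kpts_* are (N, 2) arrays and matches is an (M, 2) array of indices.
    all_kpts_a, all_kpts_b, all_matches = [], [], []
    off_a = off_b = 0
    for kpts_a, kpts_b, matches in results:
        all_kpts_a.append(kpts_a)
        all_kpts_b.append(kpts_b)
        # Shift match indices so they point into the concatenated keypoint arrays.
        all_matches.append(matches + np.array([off_a, off_b]))
        off_a += len(kpts_a)
        off_b += len(kpts_b)
    return (np.concatenate(all_kpts_a),
            np.concatenate(all_kpts_b),
            np.concatenate(all_matches))
</code></pre>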
CAFA 5 Protein Function Prediction
Private 2nd/Public 5th solution: Py-Boost and GCN
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CAFA 5 Protein Function Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone!</p> <p>First, thanks to the Kaggle and CAFA teams for the competition. The task we were solving was both important and challenging, and even though there were some frustrating moments, such as the low number of proteins in the LB and the unobvious evaluation code, we enjoyed solving this problem a lot.</p> <p>Here is a brief overview of our public 5th place solution:</p> <h3>1) Sequence embedding</h3> <p>We tried to use the following embeddings: T5, esm2-large, ankh-large. In the end, most of the models used only T5, but some of them used a concat of T5+ESM. In addition, we concatenated one-hot taxon features to the embedding. We selected only taxons that are well enough represented in both train and test, around 30 in total; the other taxons were merged into a single group.</p> <h3>2) Base models</h3> <h5>Validation</h5> <p>We were not able to create a validation scheme better than a simple 5-fold CV. We ran some experiments on the topic, but other CV schemes led to models with a lower LB score.</p> <h5>Py-boost</h5> <p>The best performing models on both CV and LB are from the gradient boosting family. We used my own GBDT implementation called <code>py-boost</code>. I made it a few years ago specifically to deal with extreme multi-output datasets, since that was my main research area at the time. It works on GPU only and is able to train multi-label models tens or even hundreds of times faster than the popular well-known implementations. You can check <code>py-boost</code> on the <a href="https://github.com/sb-ai-lab/Py-Boost" target="_blank">github</a> or read our NeurIPS <a href="https://arxiv.org/pdf/2211.12858.pdf" target="_blank">paper</a> where we explain all the strategies to speed up multi-output training.</p> <p>Locally, I was able to fit a 4.5k-output (3000/1000/500) <code>py-boost</code> model on a single V100 32GB GPU, and it takes about 1.5 hours for a single fold.</p> <h5>Logistic Regression</h5> <p>We also trained a simple 13k-output logistic regression. It performs much worse than GBDT on the popular terms, but is able to perform on rare outputs.</p> <h5>Neural networks</h5> <p>In the ensemble of models, we used a slightly modified version of the public notebook <a href="https://www.kaggle.com/code/alexandervc/pytorch-keras-etc-3-blend-cafa-metric-etc#Optimizer-%22Sophia%22-sometimes-better-than-Adam" target="_blank">Pytorch,Keras,Etc 3 Blend, CAFA metric, etc</a>. The only difference was the best cross-validated combination of hyperparameters, averaged many times.</p> <h3>3) Alternative modelling approach - predicting the conditional probabilities</h3> <p>This approach was discovered at the beginning of the competition, which is why it makes some wrong assumptions about the data. But somehow it became useful for us on both CV and LB. The main advantage of this approach is utilising the OBO graph at the inference phase, which helps to make a prediction even for terms that were not used in training. Here are the main points:</p> <ul> <li>We assume that a term can exist for a protein only <strong>if at least one of its parents exists</strong>. This is wrong. 
In reality, if a term exists, all its parents exist too because of the propagation rules.</li> <li>We reformulate the classic multi-label scheme, where the target matrix of shape (n_protein, n_terms) consists of 0s and 1s, into a new scheme. Now targets can be 0, 1 and NaN. A term for a protein has a NaN value in the matrix if there is no parent term with value 1. </li> <li>During the model training phase, NaN cells in the target matrix are masked and ignored.</li> <li>Now, our model outputs <strong>the conditional probability of a term given that at least one of its parents exists</strong>. At the inference phase we need to transform it back to raw probabilities.</li> <li>The transformation is made in the order defined by the graph. When we process a term, all its parents are already processed and have raw probabilities. All terms are included in the scheme, even if they are not used in training. For the terms that were not used for training, we used the prior mean.</li> <li>While processing a term, we make another wrong assumption: that the parents' probabilities for the term are independent. But if we assume that, according to [1], we can calculate the raw probability for the term as<br> <code>p_term_raw = p_term_cond * (1 - (1 - p_parent_0_raw) * (1 - p_parent_1_raw) * ... * (1 - p_parent_N_raw))</code>. <br> Remember that while we are processing the term, all its parents already have their raw probabilities calculated.</li> </ul> <p>That approach scores better on both CV and LB, but not by much. The main advantage is that these models are very different from the classic multi-label approach, and combining them all together boosts our score a lot. We applied this technique to the GBDTs and LogRegs, so finally we had 2 GBDTs, 2 LogRegs and a single blended NN as the base models.</p> <h3>4) Stacking with GCN</h3> <p>We used a graph convolutional network to aggregate all the predictions. It is trained for the node classification task where each node is a term and each protein is a graph (but all the proteins have the same adjacency matrix). As the node features we used the base models' predictions together with a node embedding trained from scratch. We also added the GO annotation features described in the next section.</p> <p>One interesting feature was discovered just by making a mistake: at the inference stage we had wrong model ids but still got a good score, so we turned it into a kind of test-time augmentation by averaging the predictions over shuffled models.</p> <p>Another important thing I should mention is about the metric. As far as I understand, we are evaluated only on protein/ontology pairs that are experimentally found. So, if we predict a term from an ontology that does not exist for a protein, we will not get any penalty at all! That means that we actually need to estimate <strong>the conditional probability of a term given that its ontology exists</strong>. So the correct way to fit a final stacker model for ontology X is to truncate the sample and take only the proteins that contain terms from ontology X. It gives a small boost to the score and speeds up the computations. </p> <h3>5) GO annotations</h3> <p>We discovered the GO annotations dataset provided at this <a href="http://ftp.ebi.ac.uk/pub/databases/GO/goa/UNIPROT/goa_uniprot_all.gaf.gz" target="_blank">link</a>. It provides not only the labelling but also the evidence codes of each term. We separated the codes that Kaggle considers experimental (we hope we understood it correctly) from the electronic codes. So we can use the electronic labelling as features to predict the experimental labelling given by Kaggle. 
From our analysis we discovered that about 30% of electronic labels become experimental, so using them as a model feature performs better than just adding them as is. The experimental labelling was added as is for about 500 proteins we were able to find in this dataset. We also added the raw labelling that MT provided when we could not observe it in our dataset. This last step almost didn't change our score, so the sources are almost the same. </p> <h3>6) Postprocessing</h3> <p>We also used the OBO graph for postprocessing. The problem with ML model predictions of protein terms is that they are inconsistent. Following the propagation rule that is applied to the target (I checked it in the CAFA-Evaluator <a href="https://github.com/BioComputingUP/CAFA-evaluator" target="_blank">repo</a>), if a term exists, all its parents are assumed to exist too. So a consistent model will never predict a parent probability lower than the term probability. But our models don't care about this at all, so we can manually fix the situation. Our final term prediction is <strong>the average of the term probability, the maximum propagated children probability, and the minimum propagated parents probability</strong> (see the sketch at the end of this post). That trick boosts the score just a little on the LB, but we hope it makes the model a little bit more stable.</p> <h3>P.S.</h3> <p>We wish good luck to all the competitors since, of course, like many of you, we expect a big shake-up this time. I hope all our work will not be wasted :). Looking forward to seeing the final results.</p> <p><strong>Update</strong> All the solution code is open source now and available at <a href="https://github.com/btbpanda/CAFA5-protein-function-prediction-2nd-place" target="_blank">Github</a></p>
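<p>A minimal sketch of the consistency postprocessing described in section 6, assuming the GO DAG is given as <code>parents</code>/<code>children</code> dictionaries mapping a term to lists of terms, and that <code>probs</code> holds a raw model probability for every term in the graph (a generic illustration of the idea, not the exact competition code):</p> <pre><code>from functools import lru_cache

def postprocess(probs, parents, children):
    @lru_cache(maxsize=None)
    def max_desc(term):
        # Highest probability among the term and all of its descendants.
        return max([probs[term]] + [max_desc(c) for c in children.get(term, [])])

    @lru_cache(maxsize=None)
    def min_anc(term):
        # Lowest probability among the term and all of its ancestors.
        return min([probs[term]] + [min_anc(p) for p in parents.get(term, [])])

    # Final score = average of the raw, children-propagated and parents-propagated values.
    return {t: (probs[t] + max_desc(t) + min_anc(t)) / 3.0 for t in probs}
</code></pre>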
Image Matching Challenge 2023
19th Place Solution - Simple SP+SG via HLoc
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our solution is based on the baseline. In addition, we used the well-known open-source code “Hierarchical-Localization-master” (<a href="https://github.com/cvg/Hierarchical-Localization" target="_blank">https://github.com/cvg/Hierarchical-Localization</a>) to implement the popular image matching pipeline "SuperPoint + SuperGlue" without any fine-tuning, using only the pretrained weights. So we spent no time on training. We tuned parameters such as nms_radius, max_keypoints, resize_max and sinkhorn_iterations to achieve our best leaderboard score.</p>
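<p>For readers who want to reproduce this kind of setup, here is a minimal sketch of how those parameters are typically exposed through hloc's configuration dictionaries (the keys follow hloc's stock confs; the exact values below are illustrative, not necessarily the ones this team used):</p> <pre><code>import copy
from hloc import extract_features, match_features

# Start from hloc's stock SuperPoint / SuperGlue configurations and tweak them.
feature_conf = copy.deepcopy(extract_features.confs["superpoint_aachen"])
feature_conf["model"]["nms_radius"] = 3             # keypoint non-maximum-suppression radius
feature_conf["model"]["max_keypoints"] = 4096       # cap on keypoints per image
feature_conf["preprocessing"]["resize_max"] = 1600  # longest image side fed to SuperPoint

matcher_conf = copy.deepcopy(match_features.confs["superglue"])
matcher_conf["model"]["sinkhorn_iterations"] = 20   # SuperGlue optimal-transport iterations
</code></pre>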
CAFA 5 Protein Function Prediction
6th Place Solution for the CAFA 5 Protein Function Prediction Challenge
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CAFA 5 Protein Function Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>A. MODEL SUMMARY</p> <p>A1. Background on you/your team<br> Competition Name: CAFA 5 Protein Function Prediction<br> Team Name: Evans<br> Private Leaderboard Score: 0.56076<br> Private Leaderboard Place: 6<br> Name: Quancheng Liu<br> Location: Department of Computational Medicine and Bioinformatics, University of Michigan – Ann Arbor, MI, USA<br> Email: <a href="mailto:[email protected]">[email protected]</a></p> <p>A2. Background on you/your team<br> I am a PhD student currently working under the Freddolino Lab, engaged in developing algorithms for protein function annotation tools, with a specific focus on leveraging deep learning techniques. My participation in the CAFA 5 competition was motivated by the objective to benchmark my method. This competition offered a valuable opportunity to evaluate the efficacy of my approach within a competitive, real-world context.</p> <p>A3. Summary<br> InterlabelGO integrates a sequence alignment-based method with a neural network-based method.<br> In the neural network component, the model leverages the last three layers from the ESM model as input features. The architecture of this model includes several feedforward layers.<br> The sequence alignment-based method utilizes blastp/diamond to query the UniProt-GOA database. Key metrics such as bitscore and sequence identity are used to compute a weighted average score from the search hits.<br> For the neural network models in InterlabelGO, training each model takes approximately 3 minutes on an RTX 4090 GPU</p> <p>A4. Features Selection / Engineering<br> The primary feature utilized in InterlabelGO is the embedding generated from the ESM model. These embeddings are critical as they encapsulate rich, contextual information about protein sequences, which is essential for accurate function annotation.</p> <p>A5. Training Method(s)<br> InterLabelGO used 5-fold random splits of the training data and trained 5 different models for each GO aspect; the final prediction for each GO term was mean score across 5 models.<br> Instead of using BCE loss as is often the case in comparable methods, InterLabelGO applied IA weighted soft f1 loss across both protein level and GO term level. Additionally, InterLabelGO also uses a ranked loss (Figure 1) which is the logsumexp of zero term minus positive terms plus negative term minus zero term. This rank loss was meant to capture the label correlation. During training, these loss functions are combined through simple multiplication, integrating their individual contributions to the model's learning process.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13769621%2Fe4d50771db7119d7105b2548b8dfd95e%2FPicture1.png?generation=1704901831021262&amp;alt=media" alt=""><br> Finally, the prediction from neural network method is linearly combined with the alignment-based method. The optimal weight was obtained from our validation dataset.</p> <p>A6. 
Interesting findings<br> One of the most significant discoveries in the development of InterlabelGO was the application of the IA-weighted soft F1 loss, which led to a substantial improvement in model performance compared to the traditional BCE loss. This finding is particularly noteworthy as it may point to a direction for solving the prevalent issue of label imbalance in protein function prediction.</p> <p>A7. Simple Features and Methods<br> The neural network models alone, utilizing ESM embeddings with one model per aspect, can achieve approximately 90-95% of the full model's performance, offering a simplified yet effective solution.</p> <p>A8. Model Execution Time<br> It takes around 60 minutes to train the models for the 3 aspects (5 models per aspect).<br> It takes less than 10 minutes to generate predictions for around 2000 target proteins with the neural network method; most of that time is spent on ESM feature extraction.<br> It takes less than 2 minutes to generate predictions for around 2000 target proteins with the alignment-based method using diamond.<br> All of the above timings were measured on Ubuntu 23.10 with an RTX 4090 and an i9-13900K.</p> <p>A9. Acknowledgement<br> I would like to extend my sincere thanks to Dr. Chengxin Zhang and Dr. Lydia Freddolino for their insightful discussions.</p> <p>A10. Reference<br> Su, J. et al. (2022) ZLPR: A novel loss for multi-label classification, arXiv.org. Available at: <a href="https://arxiv.org/abs/2208.02955#:~:text=Compared%20to%20other%20rank%2Dbased,the%20label%20powerset%20(LP)" target="_blank">https://arxiv.org/abs/2208.02955#:~:text=Compared%20to%20other%20rank%2Dbased,the%20label%20powerset%20(LP)</a>. </p>
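<p>As an illustration of the IA-weighted soft F1 loss described in A5, here is a minimal PyTorch sketch. The exact weighting scheme and how it is combined with the rank loss are assumptions for illustration, not the released InterLabelGO code.</p> <pre><code>import torch

def ia_weighted_soft_f1(logits, targets, ia_weights, eps=1e-8):
    """Differentiable (soft) F1 loss weighted by information accretion (IA).
    logits, targets: (n_proteins, n_terms); ia_weights: (n_terms,)."""
    probs = torch.sigmoid(logits)
    w = ia_weights.unsqueeze(0)          # broadcast the IA weight over proteins

    def soft_f1(dim):
        tp = (w * probs * targets).sum(dim=dim)
        fp = (w * probs * (1 - targets)).sum(dim=dim)
        fn = (w * (1 - probs) * targets).sum(dim=dim)
        return 2 * tp / (2 * tp + fp + fn + eps)

    term_level = 1 - soft_f1(dim=0)      # one score per GO term
    protein_level = 1 - soft_f1(dim=1)   # one score per protein
    return term_level.mean() + protein_level.mean()
</code></pre>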
Parkinson's Freezing of Gait Prediction
11th place solution: LSTM-CNN + rolling features
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Features</h1> <p>I used the following features as model inputs, generated from AccV, AccML, AccAP</p> <ul> <li>Lags</li> <li>Global mean, median, max, average, std and quantiles</li> <li>Rolling mean, median, max, average, std and quantiles (+all of these applied to reversed time series)</li> <li>Rolling mean, median, max, average, std and quantiles of first difference of time series (+all of these applied to reversed time series)</li> <li>Mean number of sign changes over rolling window (+reversed time series)</li> <li>Exponentially weighted mean of first difference of time series</li> <li>Portion of time past</li> </ul> <p>My code for the feature generation:</p> <pre><code> () -&gt; pd.DataFrame: back: rolling = dt[cols][::-].rolling(step, min_periods=) suffix = : rolling = dt[cols].rolling(step, min_periods=) suffix = aggfunc.startswith(): quantile = (aggfunc.split()[]) / ( rolling.quantile(quantile) .add_suffix(suffix)) : ( rolling.agg(aggfunc) .add_suffix(suffix)) (): cols = [, , ] dt = data.copy() defog: data[cols] = data[cols] / dt[] = (defog) verbose: () aggfunc [, , , , ]: dt = dt.join( dt[cols].groupby(dt.assign(dummy=).dummy) .transform(aggfunc).add_suffix() ) step1 = step2 = verbose: () shift [, , -, -]: shift &gt; : suffix_name = fill_data = dt[cols].iloc[: shift] : suffix_name = fill_data = dt[cols].iloc[shift:] dt = dt.join( dt[cols] .shift(shift) .fillna(fill_data) .add_suffix(suffix_name) ) aggfuncs = [ , , , , , , , ] verbose: () aggfunc aggfuncs: dt = dt.join( rolling_agg(dt, step1, aggfunc, cols) ) verbose: () aggfunc aggfuncs: funcname = aggfunc (aggfunc, ) aggfunc.__name__ dt = dt.join( rolling_agg(dt, step2, aggfunc, cols) ) verbose: () aggfunc aggfuncs: dt = dt.join( rolling_agg(dt, step1, aggfunc, cols, back=) ) verbose: () aggfunc aggfuncs: dt = dt.join( rolling_agg(dt[::-], step2, aggfunc, cols, back=) ) verbose: () diff = dt[cols].transform().add_suffix() diff = diff.fillna(diff.iloc[]) cols = [, , ] verbose: () aggfunc aggfuncs: dt = dt.join( rolling_agg(diff, step1, aggfunc, cols) ) verbose: () aggfunc aggfuncs: dt = dt.join( rolling_agg(diff, step2, aggfunc, cols) ) verbose: () aggfunc aggfuncs: dt = dt.join( rolling_agg(diff, step1, aggfunc, cols, back=) ) verbose: () aggfunc aggfuncs: dt = dt.join( rolling_agg(diff, step2, aggfunc, cols, back=) ) verbose: () sign_change = ( diff.apply(np.sign) .transform() .apply(np.) 
.divide() .fillna() .add_suffix() ) cols = [, , ] aggfuncs = [] verbose: () aggfunc aggfuncs: dt = dt.join( rolling_agg(sign_change, step1, aggfunc, cols) ) verbose: () aggfunc aggfuncs: dt = dt.join( rolling_agg(sign_change, step2, aggfunc, cols) ) verbose: () aggfunc aggfuncs: dt = dt.join( rolling_agg(sign_change, step1, aggfunc, cols, back=) ) verbose: () aggfunc aggfuncs: dt = dt.join( rolling_agg(sign_change, step2, aggfunc, cols, back=) ) verbose: () dt[] = dt.Time.divide(dt.Time.()) dt.drop(, axis=).fillna() </code></pre> <p>After those transformations, I scaled data and divided all time series into parts of length 10000 in separate files of feather format</p> <h1>Model</h1> <p>I used LSTM-CNN model: the input is first fed into three parallel blocks of Conv1D with the different kernel sizes of 3, 5 and 7. Then the input is concatenated with the output of those conv layers and passed to two sequential layers of LSTM. After all - there is a linear layer that does the classification</p> <p>Here's a code for model definition</p> <pre><code> torch.nn nn torch.nn.functional F (): nn.Sequential( nn.Conv1d(, , kernel_size, padding=), nn.ReLU(), nn.Conv1d(, , kernel_size, padding=), nn.ReLU(), ) (nn.Module): (): self.kernels = kernels ().__init__() self.conv_nets = nn.ModuleList([ block(i) i kernels ]) self.lstm = nn.LSTM( * (self.kernels) + , , , batch_first=, bidirectional=, dropout=) self.linear = nn.Linear( * , ) (): conv_res = [] net self.conv_nets: conv_res.append(net(x)) conv_res.append(x) conv_res_tensor = torch.concat(conv_res, axis=) lstm_out, _ = self.lstm(conv_res_tensor.transpose(, )) res = self.linear(lstm_out).transpose(, ) res </code></pre> <p>I did not manage to create a good validation pipeline, so i didn't use any folds. I trained the model for 30 epoch, monitoring loss on validation data (approx. 10% of subjects)</p>
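<p>Here is a hedged reconstruction of the model described above: three parallel Conv1d branches with kernel sizes 3, 5 and 7, their outputs concatenated with the raw input, two bidirectional LSTM layers, and a linear classification head. Channel counts, hidden size and the input feature count are illustrative guesses rather than the author's exact values.</p> <pre><code>import torch
import torch.nn as nn

N_FEATURES = 121   # assumed number of engineered input features
N_CLASSES = 3      # StartHesitation, Turn, Walking

def conv_block(kernel_size, in_ch=N_FEATURES, out_ch=64):
    # one branch: two Conv1d layers with ReLU; padding keeps the sequence length
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size, padding=kernel_size // 2),
        nn.ReLU(),
        nn.Conv1d(out_ch, out_ch, kernel_size, padding=kernel_size // 2),
        nn.ReLU(),
    )

class LSTMCNN(nn.Module):
    def __init__(self, kernels=(3, 5, 7), conv_ch=64, hidden=128):
        super().__init__()
        self.conv_nets = nn.ModuleList([conv_block(k, out_ch=conv_ch) for k in kernels])
        self.lstm = nn.LSTM(conv_ch * len(kernels) + N_FEATURES, hidden, 2,
                            batch_first=True, bidirectional=True, dropout=0.1)
        self.linear = nn.Linear(hidden * 2, N_CLASSES)

    def forward(self, x):                       # x: (batch, features, time)
        conv_res = [net(x) for net in self.conv_nets]
        conv_res.append(x)                      # concatenate the raw input as well
        z = torch.cat(conv_res, dim=1)
        lstm_out, _ = self.lstm(z.transpose(1, 2))
        return self.linear(lstm_out).transpose(1, 2)   # (batch, classes, time)
</code></pre>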
AMP®-Parkinson's Disease Progression Prediction
43rd (top 3%) silver medal solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: AMP®-Parkinson's Disease Progression Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The following presents my solution, which got 43rd place on the private leaderboard and earned me a silver medal.</p> <p>You can see the code for my solution <a href="https://www.kaggle.com/code/wisawesome/amp-pdpp-top-3-solution-code/notebook" target="_blank">here</a>.</p> <h1>General remarks and sketch of my solution method</h1> <ul> <li><p>I constructed my solution using protein data and <code>visit_month</code> only. I did not use the peptide data at all. I felt that all of the necessary information in the peptide data was summarized in the protein data as every peptide has a 1:1 relationship with a protein. Moreover, because a protein has many peptides, using the peptide data could lead to a situation where a particular protein was detected for 2 different observations using different peptides; thus, using peptide data could have led to a situation where the same information (the detection of a protein) could be obfuscated by seemingly different features (different peptides).</p></li> <li><p>This competition was very difficult because of the very large number of targets which competitors are forced to predict. In particular for all 4 updrs scores, we were asked to predict that score 0, 6, 12, and 24 months ahead. Moreover, we must make these predictions from every possible <code>visit_month</code>. In other words, if, for instance, <code>visit_month == 36</code>, then we must predict all 4 updrs for the 36th month, the 42nd month, the 48th month and the 60th month. To make things even worse, no conclusive list of possible <code>visit_month</code>s was ever provided throughout the competition. I strongly believe that this extraordinary complexity made this competition extremely challenging and interesting.</p></li> <li><p>To tackle the complexity, for every combination of updrs (except for <code>updrs_4</code>), <code>visit_month</code>, and months ahead, I trained a separate CatBoost model. For each of these models, I optimized the hyperparameters using Optuna.</p></li> <li><p>For <code>updrs_4</code> I used a linear model provided by <a href="https://www.kaggle.com/code/ambrosm/pdpp-almost-only-trends" target="_blank">https://www.kaggle.com/code/ambrosm/pdpp-almost-only-trends</a> by <a href="https://www.kaggle.com/ambrosm" target="_blank">Ambrosm</a> which itself was a variation of code originally posted by <a href="https://www.kaggle.com/code/vitalykudelya/only-trends" target="_blank">https://www.kaggle.com/code/vitalykudelya/only-trends</a> by <a href="https://www.kaggle.com/vitalykudelya" target="_blank">Vitaly Kudelya</a>. I used a clip month of 54 for <code>updrs_4</code>.</p></li> <li><p>Moreover, because of the large number of targets, it was possible for some observations in the test set and for some targets the prediction ought to be outside the range of what was seen in training. Because tree-based models extrapolate very poorly, I felt that if the prediction updrs score provided by CatBoost was at the boundary of what was seen in training, then that prediction might be very poor. 
As such, in cases where CatBoost predicted a score at the boundary of the training data, that score was disregarded and the linear model described in the previous bullet point was used instead.</p></li> </ul> <h1>What worked for me</h1> <ul> <li><p>A combination of CatBoost and the linear model in instances where CatBoost predicts the boundary appeared to work best in GroupKFold cross-validation. Due to the large number of targets CatBoost did a worse job by itself without the linear component.</p></li> <li><p>Setting the objective to MAE for CatBoost appeared to provide better results than MSE.</p></li> </ul> <h1>What did not work for me</h1> <ul> <li>XGBoost</li> <li>LightGBM</li> <li>Random forest</li> <li>SVR</li> <li>Plain vanilla CatBoost without the linear model for <code>updrs_4</code> and for CatBoost predictions outside the bounds of what was seen in training</li> <li>Models based on <code>visit_month</code> alone. I missed the trick all top notebooks used, but absent the trick, I found that including the protein data gave superior results.</li> </ul> <h1>Concluding remarks</h1> <p>I really enjoyed this competition and I hope the organizers will find our solutions useful.<br> Congrats to all winners and good luck to all in future competitions!!!</p>
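<p>A minimal sketch of the boundary fallback described above, i.e. trusting the linear trend model whenever the tree model predicts at the edge of the target range it saw in training. Names are illustrative; this is not the author's notebook code.</p> <pre><code>import numpy as np

def predict_with_fallback(catboost_model, linear_trend_pred, X, y_train):
    """Use CatBoost unless its prediction sits at the boundary of the training
    targets, in which case fall back to the linear trend prediction."""
    y_min, y_max = float(np.min(y_train)), float(np.max(y_train))
    pred = catboost_model.predict(X)
    at_boundary = np.isclose(pred, y_min) | np.isclose(pred, y_max)
    return np.where(at_boundary, linear_trend_pred, pred)
</code></pre>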
CAFA 5 Protein Function Prediction
9th place solution summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CAFA 5 Protein Function Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers and Kaggle for an interesting competition, along with a major thanks to the community for many helpful ideas and notebooks throughout the submission period. Also thanks to my amazing teammates <a href="https://www.kaggle.com/aypyaypy" target="_blank">@aypyaypy</a> and <a href="https://www.kaggle.com/ahmedelfazouan" target="_blank">@ahmedelfazouan</a>.</p> <p>Below is a short summary of our approach. It is an ensemble of two different ideas that turned out to complement each other well after merging.</p> <h1>Feed-forward NN</h1> <ol> <li>As input features, we went with Prot-T5 embeddings, ESM2 embeddings, Taxo features and the (arguably more risky) QuickGO annotations.</li> <li>We predict the most common X annotations as targets after propagation. In the final solution, we ensemble models which predict different numbers of most common annotations, varying from 2000 to 3500 targets in steps of 500. This turned out to result in an important leap in terms of public score.</li> <li><strong>Modeling</strong>: skip layers significantly increased CV and public score. In particular, each dense layer output is forwarded as input to a last layer, which receives a concatenated set of these layers.</li> <li>Basic 10-fold CV scheme with seed ensembling for a slight score boost.</li> </ol> <p>Our best single model reached 0.605.</p> <p>Below is a representation of the shift from using different targets + seed ensembling compared to a single model. The mixed submission predicts pred=1 more confidently.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11232701%2F16b9f779c022f3cf68f5d6c7f01c5cbe%2Fmixedtargets.png?generation=1703099790462286&amp;alt=media" alt=""></p> <h2>Propagation</h2> <p>The second part consists of a less ML-based approach. </p> <ol> <li><p>Merge SprofGO and QuickGO using the method introduced by kirill’s public NB (i.e. after taking the average, select the top 45 of each aspect; <a href="https://www.kaggle.com/code/kirilldubovik/cafa5-tuning-merge-datasets)" target="_blank">https://www.kaggle.com/code/kirilldubovik/cafa5-tuning-merge-datasets)</a>.</p></li> <li><p>Propagation process. If a child GO term has a predicted probability larger than its parent's, the parent's probability is replaced by the maximum probability of its children. This operation propagates from the leaf GO terms up to the root.</p></li> </ol> <p>The result of the propagation process is visualized below. As might be expected, a large number of probabilities are set to 1. 
On its own, this approach reaches a public score of 0.597.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11232701%2Ff0cbdd2fa1189dad1768f258b4516cc5%2Fpropagation1.png?generation=1703100755063416&amp;alt=media" alt=""></p> <h3>Extra</h3> <p>We also tried some other approaches, but sadly cannot report on their performance because the private score is calculated for the selected submissions only, which is understandable from a computational perspective.</p> <p>Some of the things we tried:</p> <ul> <li>Correcting for the overlap between the train and test sets by including a nested CV with pseudo labels</li> <li>Replacing the predictions for test rows that already exist in the training set with their OOF values</li> <li>CNNs (did not perform well at all)</li> <li>Any other sort of embeddings reduced model performance</li> </ul>
Parkinson's Freezing of Gait Prediction
6th place solution: spectrograms, wavelets, convnets, unets, and transformers
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, thanks to the competition hosts for a meaningful competition with interesting data. Second, congrats to the winners! My team was able to shake up a bit but ended up one place shy of the prize zone. All in all, I'm still happy since we did well to survive the shakeup. I'm also excited to see what top 5 teams did to create a significant gap between us. </p> <p>Our final ensemble is a combination of spectrogram models, wavelet models, and 1D conv models, which scores 0.369/0.462. Below I will discuss them as well as other important technical details. See below for an overview <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3355848%2F12e8af388e67c174b467596a85afebdf%2FFOG_overview.png?generation=1686277343043841&amp;alt=media" alt=""></p> <h1>Validation setup</h1> <p>Validation setup is important due to the noisy data. I ended up with a nested CV setup. The procedure is as follows:</p> <ol> <li>split data into 4 folds stratified by data type and grouped by subject</li> <li>set aside the validation fold (i call this the outer fold)</li> <li>resplit the 3 training folds into 4 folds and do cross validation (I call these inner folds)</li> <li>take last epoch or epoch with best validation score</li> <li>evaluate on outer fold with 4 inner fold models averaged for each outer fold <br> With this setup, we can more accurately simulate a situation where we have 250 sequences in the test set and we avg the fold model predictions. Later on, I switched to training on full inner fold data for 4 times without validation and evaluate with last epoch models on the outer fold set. </li> </ol> <h1>Input features</h1> <p>All of our models use the 3 waves and the pct time feature. In addition, Ahmet uses some metadata features in his models.</p> <h1>Spectrogram Models</h1> <p>When I first saw the data I thought it looked like some sort of waveform, like audio data, so I thought it might work well to use spectrograms to model it. The 3 dimension waves are transformed into 2D spectrograms with STFT. Importantly, transforming the data in spectrograms significantly downscaled the data in the time dimension, so since I use a hop length of 64/50, each frame represents a 0.5 secs window and I'm basically making predictions for 0.5 sec windows. During training, labels are resized with torchvision's resize to fit the size of the time dimension of the spectrograms and during inference the model output is resized back to full dimensionality. Sequences are cut into chunks of 128 secs (256 spectrogram frames) to generate spectrograms. </p> <p>Another important thing with using spectrograms is that if we use a regular type 2D conv model like resnet18, it wouldn't preserve the full dimensionality of the spectrogram (e.g a 256x256 becomes 8x8 after resnet18). In order to circumvent that, I thought to use a UNet to upsample the small feature map after the conv network. Following that, the spectrograms are pooled along the frequency dimension so I have a 1D sequence, which is then inputted into a transformer network before outputting predictions. 
</p> <p>Best submitted single spectrogram model scores 0.432/0.372. Spectrogram models are good at predicting StartHesitation and Turn but bad at Walking. </p> <h1>Wavelet Models</h1> <p>Wavelets are similar to spectrograms but also different because wavelets have different frequency/time resolutions at different frequencies. Transforming a wave into a wavelet also does not reduce the dimensionality of the scaleogram (I think this is the term for the image you get after wavelet transform). Since there's no downsampling in the time dimension, Unet is no longer needed and I simply use a resnet18/34, which downsample the scaleogram to the same time resolution as spectrogram models after Unet. In turn, I'm also classifying 0.5 sec windows. Similarly, sequences are cut into chunks of 128 secs (256 spectrogram frames) to generate spectrograms.</p> <p>Best submitted single spectrogram model scores 0.386/0.345. Wavelet models are good at predicting Walking but bad at StartHesitation and Turn, so it complements spectrogram models nicely. </p> <h1>Transformer modeling</h1> <p>Just plugging in a transformer actually does not work so well because it overfits, so I bias the transformer self-attention with a float bias mask to condition each prediction on its adjacent time points, which helps the model predict long events well. </p> <p>What the float bias mask looks like (see above) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3355848%2F3db704367c0eb1a6f7e46f82684e3f31%2FFOG_attention_mask.png?generation=1686277393310664&amp;alt=media" alt=""></p> <pre><code>torch.((L,L)) j: m[i,j]=(-(i-j)/L)** i==j: m[i,i]= m </code></pre> <h1>Data augmentation</h1> <p>I used data augmentation for spectrogram/wavelet models including the following (mostly from audiomentations): </p> <ol> <li>time stretch </li> <li>gaussian noise</li> <li>pitch shift </li> <li>wave scale aug (randomly multiply wave by 0.75~1.5 to get scale invariance)</li> <li>time feature shift aug. </li> </ol> <pre><code> self.augment = Compose([ AddGaussianNoise(=0.001, =0.015, =0.5), TimeStretch(=0.8, =1.25, =0.5,leave_length_unchanged=False,n_fft=n_fft,hop_length=hop_length), PitchShift(=-4, =4, =0.5,n_fft=n_fft,hop_length=hop_length), ]) </code></pre> <pre><code> np.random.uniform()&gt;.: ['wave']*=np.random.uniform(.,.) </code></pre> <pre><code> feature shift aug self and np()&gt;: data=data+np(-,) </code></pre> <h1>Frequency encoding and range</h1> <p>It's important to not use the high frequency bins of fourier transform, so I simply discard them and only keep the first 64 bins, corresponding to 0-15 hz. For spectrogram models, I also encode the frequency bin with torch.linspace(0,15,n_bins) expanded in the time and channel dimension and concatted so the input to the 2D conv network has 4 channels (3 directions of spectrograms + frequency encoding). It was also useful to resample the waves to a lower frequency, which I think reduces the level of noise. I used 32, 64, and 128 hz for spectrogram models and 64 hz for wavelet models. Defog waves are resampled to match the sample rate of tdcsfog waves.</p> <pre><code> self==: data=FA(data,,self.sample_rate) : data=FA(data,,self.sample_rate) </code></pre> <h1>1D conv Models</h1> <p>1D conv models are Ahmet's solution. Please see below for details:</p> <ol> <li>First align defog and tdcs on time axis (downsampled by 32 and 25, but kept their std as a feature)</li> <li>pct_time, total_len, Test are used as independent features. 
Their prediction is summed with the prediction from the 1D CNN.</li> <li>Because the input was only around 7 seconds long, cumsum features are also fed into 1D CNN.</li> <li>Outlier dominant subject is downweighted.</li> <li>Used snapshot ensembling.</li> <li>Used notype data by applying max on the predictions.</li> </ol> <p>1D conv models are weaker compared to the other 2, scoring 0.373/0.293, but are still a nice addition to the ensemble. Interestingly, 1D conv models and spectrogram models have a similar gap of 0.09 between public and private, whereas wavelet models have only a gap of 0.04. We think this is due to a change in class balance between private/public where public has more start hesitation and private has more walking. </p> <h1>Ensemble Weight Tuning</h1> <p>For our big ensemble, the weights are first hand tuned as a starting point and then I used GP_minimize to maximize CV score. We used 2 weight tuning setups at the end 1. map of 4 folds + map of full data excluding Subject 2d57c2, 2. map of 3 folds excluding fold with Subject 2d57c2 + map of full data excluding Subject 2d57c2. We do this because we consider Subject 2d57c2 to an outlier. </p> <pre><code>results=gp </code></pre> <p>The weights for our models are (we downweight loss of 2d57c2 to 0.2 in some of them)</p> <p>[0.1821, 0.2792, 0.1052] Ahemt model<br> [0.2153, 0.0, 0.0] test257 (32 hz spectrogram)<br> [0.6026, 0.1734, 0.0] test262 (64 hz spectrogram)<br> [0.0, 0.0287, 0.2579] test264 wavelet<br> [0.0, 0.2168, 0.0] test265 (32 hz spectrogram) downweight 2d57c2<br> [0.0, 0.1734, 0.2997] test266 wavelet downweight 2d57c2<br> [0.0, 0.1284, 0.1124] test263 128 hz spec<br> [0.0, 0.0, 0.2248] test271 wavelet double freq scales</p> <p>Let me know if you have questions, and I wouldn't be surprised if I forgot to mention some details . The code is a bit messy atm but i will clean up and release it soon. </p>
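<p>Here is a hedged sketch of the float bias mask described in the transformer modeling section above: each position receives an attention bias that decays with temporal distance, so predictions are conditioned on nearby time points. The exponent and the diagonal value are assumptions, not the original numbers.</p> <pre><code>import torch

def make_attention_bias(L, power=2):
    # float mask added to the self-attention logits (e.g. via attn_mask in
    # torch.nn.MultiheadAttention); larger values favour nearby time steps
    m = torch.zeros((L, L))
    for i in range(L):
        for j in range(L):
            if i == j:
                m[i, j] = 1.0
            else:
                m[i, j] = (1 - abs(i - j) / L) ** power
    return m
</code></pre>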
Image Matching Challenge 2023
1st Place Solution: Sparse + Dense matching, confidence-based merge, SfM, and then iterative refinement
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>0. Introduction</h1> <p>We are delighted to be participating in the image matching challenging 2023. Thanks to the organizers, sponsors, and Kaggle staff for their efforts, and congrats to all the participants. We learn a lot from this competition and other participants.</p> <p>Our team members include Xingyi He, Dongli Tan, Sida Peng, Jiaming Sun, and Prof. Xiaowei Zhou. We are affiliated with the State Key Lab. of CAD&amp;CG, Zhejiang University. I would like to express my gratitude to my teammates for their hard work and dedication. </p> <h1>1. Overview and Motivation</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14597895%2F2b2b6c045d8a2dfa0a536090f025db02%2Fmain_fig.png?generation=1686841291013288&amp;alt=media" alt="Fig.1"><br> We proposed a coarse-to-fine SfM framework to draw benefits from the recent success of detector-free matchers, while solving the multi-view inconsistency issue of detector-free matchers.<br> Due to the time limitation in the competition, we also incorporate the "light-weight" sparse feature detection and matching methods to determine image rotation and final overlap region between pairs, where the detector-free matcher will be performed upon.</p> <p>However, caused by the multi-view inconsistency of detector-free matchers, directly using matches for SfM will lead to a significant number of 2D and 3D points. It is hard to construct feature tracks, and the incremental mapping phase will be extremely slow.</p> <p>Our coarse-to-fine framework solves this issue by first quantizing matches with a confidence-guided merge approach, improving consistency while sacrificing the matching accuracy. We use the merged matches to reconstruct a coarse SfM model.<br> Then, we refine the coarse SfM model by a novel iterative refinement pipeline, which iterates between an attention-based multi-view matching module to refine feature tracks and a geometry refinement module to improve the reconstruction accuracy.</p> <h1>2. Method</h1> <h2>2.1 Image Pair Construction</h2> <p>For each image, we select k relevant images using image retrieval method. Here we haven't found significant differences among different retrieval methods. This could potentially be attributed to the relatively small number of images or scenes in the evaluation dataset.</p> <h2>2.2 Matching</h2> <h3>2.2.1 Rotation Detection</h3> <p>There are some scenes within the competition datasets which contain rotated images. Since many popular learning-based matching methods can not handle this case effectively, Our approach, similar to that of many other participants, involves rotating one of the query images several times[0, π/2, π, 3π/2] and matching it with the target image, respectively. This helps to mitigate the drastic reduction in the number of matching points caused by image rotations.</p> <h3>2.2.2 Overlap Detection</h3> <p>Like last year's solution, estimating the overlap region is a commonly employed technique. We use the first round of matching to obtain the overlap region and then perform the second round of matching within them. 
According to the area ratio, we resize the smaller region in one image and align it with the larger region. We find a sparse matcher is capable of balancing efficiency and effectiveness.</p> <h3>2.2.3 Matching</h3> <p>We find the ensemble of multiple methods tends to outperform any individual method. Due to time constraints, we choose the combination of one sparse method (SPSG) and one dense method (LoFTR). We also find that substitute LoFTR by DKMv3 performs better in this competition.</p> <h2>2.3 Multi-view inconsistency problem</h2> <p> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14597895%2Fab3bc20584b40131a4fe28b55d7d1530%2Fnon_repeatable_problem.png?generation=1686841396817545&amp;alt=media"> </p> <p>As shown in Fig.2, the resulting feature locations of detector-free matchers (e.g., LoFTR) in an image depend on the other image. This pair-dependent nature leads to fragmentary feature tracks when running pair-wise matching over multiple views, which makes detector-free matchers not directly applicable to existing SfM systems (e.g., COLMAP).<br> Moreover, as for the sparse detection and matching part, since the cropped image overlap regions are also relevant to the other image, re-detecting keypoints on the cropped images for matching also shares the same multi-view inconsistency issue.<br> This issue is solved by the following coarse-to-fine SfM framework.</p> <h2>2.4 Coarse SfM</h2> <p>In this phase, we first strive for consistency by merging to reconstruct an initial coarse SfM model, which will be further refined for higher pose accuracy in the refinement phase.</p> <h3>2.4.1 Confidence-guided Merge</h3> <p>After the matching, we merge matches on each image based on confidence to improve the consistency (repeatability) of matches for SfM. For each image, we first aggregate all its matches with other images and then perform NMS with a window size of 5 to merge matches into points with the local highest confidence, as depicted in Fig.1(2). After the NMS, the number of 2D points can be significantly reduced, and the top 10000 points are selected for each image by sorting the confidence if the total point is still larger than the threshold.</p> <h3>2.4.2 Mapping</h3> <p>Based on the merged matches, we perform the coarse SfM by COLMAP. Note that the geometry verification is skipped since RANSAC is performed in the matching phase. For the reconstruction of the scene with a large number of images (~250 in this competition), we enable the parallelized bundle adjustment (PBA) in COLMAP. Specifically, since PBA uses a PCG solver, which is an inexact solution to the BA problem and unlike the exact solution of Levenberg-Marquardt (LM) solver used by default in Ceres, we enable the PBA only after a large number of images are registered (i.e., &gt;40). This is based on the intuition that the beginning of reconstruction is of critical importance, and the inexact solution of PBA may lead to a poor initialization of the scene.</p> <h2>2.5 Iterative Refinement</h2> <p>We proceed to refine the initial SfM model to obtain improved camera poses and point clouds. To this end, we propose an iterative refinement pipeline. Within each iteration, we first enhance the accuracy of feature tracks with a transformer-based multi-view refinement matching module.<br> These refined feature tracks are then fed into a geometry refinement phase which optimizes camera poses and point clouds jointly. 
The geometry refinement iterates between the geometric-BA and track topology adjustment (including complete tracks, merge tracks, and filter observations). The refinement process can be performed multiple times for higher accuracy.<br> Our feature track refinement matching module is trained on the MegaDepth, and more details are in our paper which is soon available on arXiv.</p> <table> <thead> <tr> <th>Method</th> <th>score(private)</th> </tr> </thead> <tbody> <tr> <td>spsg</td> <td>0.482</td> </tr> <tr> <td>spsg+LoFTR</td> <td>0.526</td> </tr> <tr> <td>spsg+LoFTR+refine</td> <td>0.570</td> </tr> <tr> <td>spsg+DKM^+refine</td> <td>0.594</td> </tr> </tbody> </table> <p>^only replace LoFTR in the Haiper dataset</p> <h1>3. Ideas tried but not worked</h1> <h2>3.1 Other retrieval modules</h2> <p>Other than NetVLad, we have also tried the Cosplace, as well as using SIFT+NN as a lightweight detector and matcher for retrieval. However, there is no noticeable improvement, even performs slightly worse than NetVLad in our framework. We think this may be because the pair construction is at the very beginning of the overall pipeline, and our framework is pretty robust to the image pair variance.</p> <h2>3.2 Other sparse detectors and matchers</h2> <p>Other than Superpoint + Superglue, we have also tried Silk + NN, which performs worse than Superpoint + Superglue. I think it may be because we did not successfully tune it to work in our framework.</p> <h2>3.3 Other detector-free matchers</h2> <p>Other than LoFTR, we also tried Matchformer and AspanFormer in our framework. We find Matcherform performs on par with LoFTR but slower, which will lead to running out of time. AspanFormer performs worse than LoFTR when used in our framework in this challenge.</p> <h2>3.4 Visual localization</h2> <p>We observe that there may image not successfully registered during mapping. Our idea is to "focus" on these images and regard them as a visual localization problem by trying to register them into the existing SfM model. We use a specifically trained version of LoFTR for localization, which can bring ~3% improvement on the provided training dataset. However, we did not have a spare running time quota in submission and, therefore, did not successfully evaluate visual localization in the final submission.</p> <h1>4. Some insights</h1> <h2>4.1 About the randomness</h2> <p>We observe that the ransac performed with matching, the ransac PnP during mapping, and the bundle adjustment multi-threading in COLMAP may contain randomness.<br> After a careful evaluation, we find the ransac randomness seed in both matching and mapping is fixed. 
The randomness can be dispelled by setting the number of threads to 1 in COLMAP.<br> Therefore, our submission can achieve exactly the same results after multiple rerunning, which helps us to evaluate the performance of our framework.</p> <h2>4.2 About the workload of the evaluation machine</h2> <p>Given that the randomness problem of our framework is fixed, we observe that the submission during the last week before the DDL is slower than (~20min) the previous submission with the same configuration.<br> Our final submission before the DDL using the DKM as a detector-free matcher has run out of time, which we believe may bring improvements, and we decided to choose it as one of our final submissions.<br> We rerun this submission version after the DDL, and it can be successfully finished within the time limit, which achieves 59.4 finally.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14597895%2F93c7812782a9455aaeafabdffec752b9%2Ffinal_shot_2.png?generation=1686842141320946&amp;alt=media" alt=""></p> <h1>5. Acknowledgment</h1> <p>The members of our team have participated in the IMC for three consecutive years(IMC 2021, 2022, and 2023), and we are glad to see there are more and more participants in this competition, and the number of submissions achieves a new high this year. We really enjoyed the competition this year since one of the most applications of feature matching is SfM. The organizers remove the limitation of the only matching submission as in IMC2021 but limit the running time and computation resources (a machine with only 2 CPU cores and 1 GPU is provided), which makes the competition more interesting, challenging, and flexible. Thanks to the organizers, sponsors, and Kaggle staff again!</p> <h1>6. Suggestions</h1> <p>We also have some suggestions that we notice the scenes in this year's competition are mainly outdoor datasets. We think more types of scenes, such as indoor and object-level scenes with severe texture-poor regions, can be added to the competition in the future. In our recent research, we also collected a texture-poor SfM dataset which is object-centric with ground-truth annotations. We think it may be helpful for the future IMC competition, and we are glad to share it with the organizers if needed.</p> <p>Special thanks to the authors of the following open-source software and papers: COLMAP, SuperPoint, SuperGlue, LoFTR, DKM, HLoc, pycolmap, Cosplace, NetVlad.</p>
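<p>To illustrate the confidence-guided merge of section 2.4.1, here is a minimal sketch that keeps, for every image, only the locally most confident aggregated match per small window and then at most 10000 points overall. It is a grid-based simplification for illustration, not the authors' implementation.</p> <pre><code>import numpy as np

def merge_keypoints(kpts, conf, window=5, topk=10000):
    """kpts: (N, 2) aggregated match locations on one image; conf: (N,) confidences."""
    order = np.argsort(-conf)                  # process points from high to low confidence
    kpts, conf = kpts[order], conf[order]
    cells = (kpts // window).astype(np.int64)  # quantize into window x window cells
    _, first_idx = np.unique(cells, axis=0, return_index=True)   # first hit = max conf per cell
    keep = first_idx[np.argsort(-conf[first_idx])][:topk]        # cap the number of points
    return kpts[keep], conf[keep]
</code></pre>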
Parkinson's Freezing of Gait Prediction
From silver 29th place to 366th nowhere, good shakeup. Final 1D-CNN solution.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h4><a href="https://www.kaggle.com/code/ernestglukhov/practicum-final-1d-cnn-solution" target="_blank">Final 1D-CNN solution</a></h4> <p>In this Kaggle notebook, we present a comprehensive approach aimed at predicting Freezing of Gait (FoG) using an ensemble of 1D-CNN models and a sophisticated segmentation process. The segmentation process helps capture distinct patterns in the acceleration data, while the ensemble strategy involves training multiple 1D-CNN models using K-fold cross-validation. Our model architecture consists of multiple Convolutional Blocks, each containing convolutional layers with different kernel sizes and dilations. The models in the ensemble are trained separately on the "tdcsfog" and "defog" datasets. By treating the tdcsfog and defog datasets separately, we aimed to optimize the models' performance for each specific type of data.</p>
Google - Isolated Sign Language Recognition
44th Place Silver - How To Improve Best Public Notebook
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you Kaggle, Kagglers, PopSign, and Partners for a fun competition! Public notebooks in this competition are very strong! The host said that they need <code>LB &gt;= 0.6</code> to be helpful and the best public notebook scores <code>LB = 0.73</code>! Great job Kagglers!</p> <p>I joined this competition one week ago, so I didn't have much time to build my own model. Instead, i read the discussions and public notebooks, and attempted to improve the best public notebook. I boosted the best public notebook from public <code>LB = 0.73</code> to <code>LB = 0.77</code> and achieved 44th place Silver.</p> <h1>Praise for Best Public Notebooks</h1> <p>I am very impressed with the shared public notebooks</p> <ul> <li>Great Transformer model architecture</li> <li>Selected which 66 Landmarks out of 543 Landmarks are important (hands, lips, pose)</li> <li>Great preprocessing to convert videos of variable length to small fixed length</li> <li>Accurate local validation scheme which estimates LB score</li> </ul> <h1>Improvements to Best Public Notebook</h1> <p>After reading the best public notebook, I made the following 11 changes to boost CV and LB <code>+0.03</code> from <code>LB = 0.73</code> to <code>LB = 0.76</code>:</p> <ul> <li>Train 1 model =&gt; Train 4 models</li> <li>Add Time Scale augmentation</li> <li>Ensemble and apply TFLite FP16 quantization</li> <li>Change the following parameters:</li> <li>INPUT_SIZE, 64 =&gt; 12</li> <li>BATCH_ALL_SIGNS_N, 4 =&gt; 1</li> <li>N_EPOCHS, 250 =&gt; 120</li> <li>LANDMARK_UNITS, 384 =&gt; 224</li> <li>UNITS, 512 =&gt; 376</li> <li>NUM_BLOCKS, 2 =&gt; 3</li> <li>MLP_RATIO, 4 =&gt; 3</li> <li>MLP_DROPOUT_RATIO, 0.40 =&gt; 0.30</li> <li>remove random frame masking</li> </ul> <h1>Published Code</h1> <p>I published my code <a href="https://www.kaggle.com/code/cdeotte/improve-best-public-notebook-lb-0-76" target="_blank">here</a> for those curious to learn "How To Improve Best Public Notebook" from <code>LB = 0.73</code> to <code>LB = 0.76</code></p> <h1>My Thought Process</h1> <p>Most Kagglers are probably curious why and how I discovered these modifications. So let me share my thinking. First, the easiest way to boost CV LB for NN is to ensemble NN with multiple copies of itself (trained with different seeds). So the easiest way to boost public notebook CV LB is to make the model smaller and make the model faster (because this competition has size and time constraints). Then we can include multiple copies of the NN during inference. And each NN we will train with 100% train data.</p> <p>The first thing i did was reduce <code>LANDMARK_UNITS</code>, <code>UNITS</code>, and <code>MLP_RATIO</code> to reduce the model size. I slowly reduced these parameters to see how small I could go without reducing CV score. It is important to reduce the model to less than 5M (i.e. 5 million) parameters because then we can infer 4 copies of the model within the 40MB size restriction when using FP16. With 40MB size limit, the maximum number of parameters during inference with FP16 is 20M. 
I also tried less than 4M parameters and 3.3M parameters which allows 5 copies and 6 copies respectively (when using FP16 quantization) </p> <p>Second, I made the transformer deeper with more transformer blocks. I increased blocks from 2 to 3 which boosted the CV and LB about <code>+0.01</code> or <code>'0.02</code>. Reducing the <code>MLP_RATIO</code> from 4 to 3 significantly reduced the number of model parameters without affecting CV LB so this gave me extra parameters to allow increasing the transformer blocks.</p> <p>Next, I reduced <code>INPUT_SIZE</code> to speed up the model to make it infer (and train) faster. The parameter <code>INPUT_SIZE</code> is the transformer sequence length. This competition has a inference time constraint of 1 hour (in addition to a 40MB size constraint). Reducing <code>INPUT_SIZE</code> will decrease inference time (and does not affect model size). With sequence length 12, we can easily infer 5+ models under 1 hour.</p> <p>At this point, i got lucky. Reducing <code>INPUT_SIZE</code> actually boosted the CV and LB by <code>+0.01</code> or <code>+0.02</code> or so. After this I tuned the learning rate, batch size, and learning schedule and found <code>N_EPOCHS = 120</code> and <code>BATCH_ALL_SIGNS_N = 1</code> to be best. Many Kagglers overlook the fact that changing batch size can make a big difference for models. We should always try 0.25x, 0.5x, 2x, 4x batch size and change the learning rate for those experiments to be 0.25x, 0.5x, 2x, 4x respectively. Experiments demonstrate that <code>Batch Size = 256</code> is better than the original <code>Batch Size = 1024</code>. This helped <code>+0.0005</code> or so.</p> <p>Lastly NN always benefit from more data and data augmentation. I tried external data and many different data augmentation. Using external data did not help me. Regarding augmentation i only got benefit from <code>frame drop out</code> and <code>time scale augmentation</code>. This helped about <code>+0.0005</code> or so. I also tried rotation, scale, shift but this did not help. Flip was not needed since preprocessing normalized all videos to be left handed.</p>
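<p>Here is a minimal sketch of the size/speed trick described above: average several identically-structured Keras models into one graph and export it with FP16 post-training quantization so the ensemble fits in the 40MB TFLite budget. The <code>model_seedN</code> variables are placeholders for already-trained models.</p> <pre><code>import tensorflow as tf

def build_ensemble(models):
    # average the outputs of several copies of the model trained with different seeds
    inp = tf.keras.Input(shape=models[0].input_shape[1:])
    out = tf.keras.layers.Average()([m(inp) for m in models])
    return tf.keras.Model(inp, out)

ensemble = build_ensemble([model_seed0, model_seed1, model_seed2, model_seed3])

converter = tf.lite.TFLiteConverter.from_keras_model(ensemble)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]   # FP16 weights: ~2 bytes per parameter
with open('model.tflite', 'wb') as f:
    f.write(converter.convert())
</code></pre>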
Image Matching Challenge 2023
46th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our method consists of three simple parts: keypoint matching, structure from motion, and post-processing. I will briefly explain each of them with a focus on the differences from the baseline.</p> <h2>Keypoint Detection and Matching</h2> <p>We adopted a method that performs keypoint extraction and matching separately, rather than an end-to-end matcher, so that the same keypoints (and hence 3D model points) can be shared across many images. Ultimately, we only used KeyNetAffNetHardNet, but if we had had more time, we would have liked to ensemble it with SuperPoint-based methods.<br> The simple changes listed below improved the score.</p> <ul> <li>increasing the number of keypoints (2048→2048*4).</li> <li>extracting keypoints from images resized to several resolutions.</li> <li>using the AdaLAM and OriNet algorithms provided in the code.</li> <li>using both the Fundamental matrix and the Homography matrix and merging the two results to narrow down the matches based on geometric constraints.</li> <li>setting an upper limit on the number of feature matches per image pair to reduce the computational complexity of the 3D reconstruction.</li> </ul> <h2>Structure from Motion</h2> <p>We tried multiple minimum match-count thresholds and kept, as the final result, the reconstruction that registered the largest number of images. However, while trying the thresholds from the largest one downwards, once the proportion of registered images exceeded a threshold we did not try any smaller minimum match counts.</p> <h2>Post-Processing</h2> <p>Images whose poses could not be estimated by COLMAP were registered using cv2.solvePnPRansac.</p>
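<p>A minimal sketch of that post-processing step: registering a leftover image from 2D-3D correspondences with RANSAC PnP. Gathering the correspondences and the intrinsics is assumed to happen elsewhere; only the OpenCV call reflects what the write-up states.</p> <pre><code>import cv2
import numpy as np

def register_missing_image(pts3d, pts2d, K):
    """pts3d: (N, 3) reconstructed points matched to the image, pts2d: (N, 2) pixels,
    K: (3, 3) camera intrinsics. Returns (R, t) or None if registration fails."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, None,
        iterationsCount=10000, reprojectionError=8.0, flags=cv2.SOLVEPNP_EPNP)
    if not ok or inliers is None or len(inliers) &lt; 10:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
</code></pre>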
Vesuvius Challenge - Ink Detection
7th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We are very grateful for the organization of this very interesting competition. Sincere thanks to the hosts and the Kaggle team. A big thank you to Yannick and Max, it was lots of fun working with you! For all of us it was the first Kaggle competition. </p> <h2>Summary</h2> <p>In this competition submission, we used the <a href="https://github.com/MIC-DKFZ/nnUNet" target="_blank">nnU-Net framework</a>, a recognized powerhouse in the medical imaging domain. The success of nnU-Net is validated by its adoption in winning solutions of 9 out of 10 challenges at <a href="https://arxiv.org/abs/2101.00232" target="_blank">MICCAI 2020</a>, 5 out of 7 in MICCAI 2021 and the first place in the <a href="https://amos22.grand-challenge.org/final-ranking/" target="_blank">AMOS 2022</a> challange.</p> <p>To tackle the three-dimensional nature of the given task, we designed a custom network architecture: a 3D Encoder 2D Decoder U-Net model using Squeeze-and-Excitation (SE) blocks within the skip connections. We used fragment-wise normalization and selected 32 slices for training, enabling us to extend the patch size to 32x512x512. We divided each fragment into 25 pieces and trained five folds. We only submitted models that performed well on our validation data. For the final submission, we ensambled the weights of two folds (zero and two) from two respective models. One model was trained with a batch size of 2 and a weight decay of 3e-5, while the other was trained with a batch size of 4 and a weight decay of 1e-4. During test time augmentation, we implemented mirroring along all axes and in-plane rotation of 90°, resulting in a total of eight separate predictions per model for each patch. </p> <p>For the two submissions we chose two different postprcessing techniques. The first approach involved setting the threshold of the softmax outputs from the network from 0.5 to 0.6. As a second step we conducted a connected component analysis to eliminate all instances with a softmax 95th percentile value below 0.8. The second approach involved utilizing an off-the-shelf 2D U-Net model with a patch size of 2048x2048 on the softmax outputs of the first model. The output was resized to 1024x1024 for inference and then scaled up to 2048x2048. The intention behind this step was to capture more structural elements, such as the shape of letters, due to the higher resolution of the input. We regret both since they only improved results for public testset.</p> <h2>Model Architecture</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6289140%2Fbf19626f416be872fbb0b56b92a6c585%2FModel.png?generation=1686847814703959&amp;alt=media" alt=""></p> <p>As been mentioned, we chose a 3D Encoder 2D Decoder U-Net model using SE blocks within the skip connections. Therefore, the selected slices were still seen as a 3D input volume by our network. After passing through the encoder, the features were mapped to 2D on all levels (i.e., skip connections) using a specific weighting. 
One unique aspect of the network to highlight is that the encoder contained four convolutions in each stage to process the difficult 3D input, whereas the decoder only had two convolutional blocks.</p> <p>The mapping was initially performed using a simple average operation but was later refined with the use of Squeeze-and-Excitation. However, instead of applying the SE on the channel dimension — as is usually done to highlight important channels — we applied one SE Block per level (i.e., skip) to all channels, but on the x-dimension. This results in a weighting of the slices in feature space, so when aggregating with the average operation later, each slice has a different contribution.</p> <h2>Preprocessing</h2> <p>In the preprocessing stage, we cropped each fragment into 25 parts and ensured they contain an equal amount of data points (area labeled as foreground in the mask.png). This process was performed to create five folds for training.</p> <p>For the selection of the 32 slices, we calculated the intensity distributions for each individual fragment. From these distributions, we determined the minimum and maximum values and calculated the middle point between them. We then cropped 32 slices around this chosen central slice. The following plot shows the intensity distribution for the individual fragments. The vertical line in the plot represents the midpoint between the maximum and minimum intensity values.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6289140%2F1fd4a8e4dbccd46f59d3ca92b19fb74d%2Fintesity_dist.png?generation=1686847846933995&amp;alt=media" alt=""></p> <p>To further preprocess the data, we applied z-scoring to normalize the intensity values. Additionally, we clipped the intensity values at the 0.5 and 99.5 percentiles to remove extreme outliers. Finally, we performed normalization on each individual fragment to ensure consistency across the dataset.</p> <h2>Training</h2> <p>Here are some insights into our training pipeline.</p> <h3>Augmentation</h3> <p>For the most part, we utilized the out-of-the-box augmentation techniques provided by nnU-Net, a framework specifically designed for medical image segmentation. These techniques formed the foundation of our data augmentation pipeline. However, we made certain modifications and additions to tailor the augmentation process to our specific task and data characteristics:</p> <ul> <li>Rotation: We performed rotations only in the plane, meaning we applied rotations along the y and z axes. Out-of-plane rotations were considered as a measure to ensure stability but were not implemented.</li> <li>Scaling: We introduced scaling augmentation, allowing the data to be randomly scaled within a certain range. This helped to increase the diversity of object sizes in the training data.</li> <li>Gaussian Noise: We added Gaussian noise to the data, which helps to simulate realistic variations in image acquisition and improve the model's ability to handle noise.</li> <li>Gaussian Blur: We applied Gaussian blur to the data, with varying levels of blurring intensity. This transformation aimed to capture the variations in image quality that can occur in different imaging settings.</li> <li>Brightness and Contrast: We incorporated brightness and contrast augmentation to simulate variations in lighting conditions.</li> <li>Simulate Low Resolution: We introduced a transformation to simulate low-resolution imaging by randomly zooming the data within a specified range. 
This augmentation aimed to make the model more robust to lower resolution images.</li> <li>Gamma Transformation: We applied gamma transformations to the data, which adjusted the pixel intensities to enhance or reduce image contrast. This augmentation technique helps the model adapt to different contrast levels in input images.</li> <li>Mirror Transform: We employed mirroring along specified axes to introduce further variations in object orientations and appearances.</li> </ul> <h3>Training Curves for Submission Folds</h3> <p>Batch size of 4 and a weight decay of 1e-4. Fold 0.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6289140%2Fb0086c8f134adc9b58a78097cec48151%2Fprogress_0_0.png?generation=1686847908240065&amp;alt=media" alt=""><br> Batch size of 4 and a weight decay of 1e-4. Fold 2.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6289140%2F5afa0a5b3cc9890b40155af6265c9617%2Fprogress_0_2.png?generation=1686847917423689&amp;alt=media" alt=""><br> Batch size of 2 and a weight decay of 3e-5. Fold 0.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6289140%2Facd870b03c91b9f4f6875b9d01a7858f%2Fprogress_1_0.png?generation=1686847926934043&amp;alt=media" alt=""><br> Batch size of 2 and a weight decay of 3e-5. Fold 2.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6289140%2F7cd38547bc3d2d76aa0050889c09ed2a%2Fprogress_1_2.png?generation=1686847935394605&amp;alt=media" alt=""></p> <h2>Inference</h2> <p>We made the decision to submit only the ensemble of fold 0 and fold 2 models due to their superior performance on the validation data compared to the other folds. The difference in Dice score between these two folds and the rest of the folds was substantial, with a margin of ~0.1. As a result, we deemed the ensemble of fold 0 and fold 2 models to be the most reliable and effective for our final submission (we fools).</p> <p>Additionally, we made the decision to incorporate test time augmentation (TTA) techniques during the inference process. TTA involved applying mirroring along all axes and in-plane rotation of 90° to each patch. By performing these augmentations, we generated a total of eight separate predictions per model for each patch.</p> <h3>Post Processing</h3> <p>In a moment of desperate determination to achieve better results on the public test set, one audacious team member decided to dive headfirst into the realm of advanced post-processing. This daring soul concocted a daring plan: raise the threshold of the softmax outputs from a mundane 0.5 to a daring 0.6. But that was just the beginning!</p> <p>Undeterred by caution, the same intrepid individual embarked on a quest to conduct a connected component analysis, mercilessly discarding all instances with a lowly softmax 95th percentile value below the illustrious threshold of 0.8.</p> <p>With fervor and a touch of madness, this brave adventurer tested countless combinations of thresholds, determined to find the golden ticket to enhanced validation scores across all folds. A relentless pursuit of validation improvement that knew no bounds.</p> <p>On the public test set, this fearless undertaking delivered a substantial boost of 0.05 dice points, raising hopes and spirits across the team. 
The unexpected improvement injected a renewed sense of excitement and optimism.</p> <p>However, as fate would have it, on the ultimate battlefield of the 50% final, the outcome took a peculiar twist. The gains dwindled ever so slightly, with a meager drop of 0.002 dice points. Though the difference may seem minuscule, in the realm of fierce competition, every decimal point counts.</p> <p>Sorry guys.</p> <h3>2D Unet Refinement</h3> <p>The second approach involved employing a 2D U-Net model with a patch size of 2048x2048 on the softmax outputs generated by the first model. Subsequently, the model's output was resized to 1024x1024 for inference purposes and then scaled up to the original resolution of 2048x2048. The rationale behind this strategy was to leverage the higher resolution input data to capture finer structural details, including the intricate shapes of letters. The training data for this model was derived from inferences made by our various trained models on the original training data.</p> <p>Whose idea was this?</p> <p>Sorry again.</p> <h2>Preliminary Last Words</h2> <p>More details and code will follow, cheers!</p>
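<p>To make the slice-weighting idea from the architecture section above concrete, here is a minimal PyTorch sketch (assumed code, not the team's nnU-Net implementation) of a squeeze-and-excitation style gate applied to the depth axis of a 3D skip feature before it is averaged down to 2D; the shapes and the reduction factor are assumptions.</p> <pre><code>import torch
import torch.nn as nn

class SliceSE(nn.Module):
    # Hypothetical sketch: gate the depth (slice) axis of a (B, C, D, H, W)
    # feature map with an SE-style weighting, then average depth away to 2D.
    def __init__(self, depth, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(depth, depth // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(depth // reduction, depth),
            nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (B, C, D, H, W)
        w = self.gate(x.mean(dim=(1, 3, 4)))     # (B, D): one weight per slice
        x = x * w[:, None, :, None, None]        # re-weight the slices
        return x.mean(dim=2)                     # (B, C, H, W) 2D skip feature

feats2d = SliceSE(depth=32)(torch.randn(2, 8, 32, 64, 64))  # -> (2, 8, 64, 64)
</code></pre>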
Vesuvius Challenge - Ink Detection
73rd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Acknowledgement:</h2> <p>Firstly, I would like to express my gratitude to the organizers of the competition and Kaggle. Next, I want to thank <a href="https://www.kaggle.com/yoyobar" target="_blank">@yoyobar</a>; I started the competition based on his/her open-source notebooks, thank you!🎉 This competition is my first and I am glad that my ranking is in the top 6%, which is a milestone for me. This achievement cannot be separated from my enthusiastic senior brothers, and I am grateful for their valuable suggestions!</p> <h2>1. Overview:👀</h2> <ul> <li>Ensemble of 8 Unet models with mit, pvt as backbones</li> <li>TTA in four directions (0°, 90°, 180°, 270°) in the inference stage</li> </ul> <h2>2. DataFlow:👇</h2> <ul> <li>Select channels 28-34 (11 channels), split them into groups of [28-30, 30-32, 32-34] and concatenate them together in the batch_size dimension</li> <li>Split fragment 2 into two sub-fragments A and B; thus we have four parts of the data and can achieve 4-fold cross-validation</li> </ul> <h2>3. Data Augmentation:</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11574777%2F80b2b3e3d5d6a89f77c70a14eb5a75d3%2F1686900312738.jpg?generation=1686900353043727&amp;alt=media" alt=""></p> <h2>4. Model Selection</h2> <p>I initially tried using Segformer, but the results of this model were not good at that time, so I did not use Segformer in subsequent experiments. Now it seems like I did something wrong.<br> I have tried many different encoder models, such as resnet, resnext, seresnext, efficientnet, convnext, mit, and pvt, but <strong>only mit and pvt have shown superior performance</strong>, so all subsequent experiments are based on these two models. Meanwhile, it should be noted that overly complex and deep networks may lead to overfitting of the model on training data, so I chose <strong>mit-b2 and pvt-v2-b2</strong> as the encoder models.</p> <h2>5. Improvement of Network Structure</h2> <ul> <li>Feature maps of all shapes output by the encoder go through an additional SK attention module</li> <li>Then perform the following operations on the feature map that has undergone SK attention, as referenced by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11574777%2Fdf703992b4d39cfa8c5758910e4ddceb%2F1686902361015.png?generation=1686902393516010&amp;alt=media" alt=""><br> The structure of self.weight1 is as follows:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11574777%2Fccf94645bbad0c6de308f39056f3293a%2F1686902566772.png?generation=1686902593485222&amp;alt=media" alt=""></li> </ul> <h2>6. Other Training Details</h2> <ul> <li>Image Size: 224 * 224</li> <li>Stride: 112</li> <li>Loss Function: SoftBCELoss</li> <li>Optimizer: AdamW</li> <li>Initial LR: 0.0001</li> <li>Scheduler: GradualWarmupSchedulerV2 with CosineAnnealingLR</li> <li>Batch_size: 32</li> <li>Epoch: 30</li> </ul> <h2>7. 
Inference</h2> <ul> <li>Ensemble of 8 Unet models with mit, pvt as backbones</li> <li>TTA in four directions (0°, 90°, 180°, 270°) in the inference stage</li> <li>Mit &amp; Pvt ensemble 👉Public LB: 0.721 Private LB: 0.586</li> <li>Single Pvt 👉Public LB: 0.726 Private LB: 0.591</li> </ul> <h2>8. Methods that were tried but did not work</h2> <ul> <li>DiceLoss❌</li> <li>DeNoise module❌</li> <li>Image size larger than 224 * 224❌</li> <li>Adding attention only to the output of the encoder without adding modules, referenced by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>, is not effective❌</li> <li>The ensemble of mit and pvt seems to give worse results than the individual pvt❌</li> </ul>
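<p>For reference, here is a minimal PyTorch sketch (assumed code, not the author's) of the four-direction rotation TTA used at inference: rotate the input, predict, rotate the logits back, and average.</p> <pre><code>import torch

def tta_rot4(model, x):
    # x: (B, C, H, W) with H == W; average predictions over 0/90/180/270 degree rotations
    preds = []
    for k in range(4):
        xr = torch.rot90(x, k, dims=(-2, -1))             # rotate the input
        yr = model(xr)                                    # segmentation logits
        preds.append(torch.rot90(yr, -k, dims=(-2, -1)))  # undo the rotation
    return torch.stack(preds).mean(dim=0)
</code></pre>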
CAFA 5 Protein Function Prediction
3rd Place Solution for the CAFA 5 Protein Function Prediction
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CAFA 5 Protein Function Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and the hosts for organising such an interesting competition.<br> This competition task: "Predict the function of this protein using any available data" was linked to the fundamentals of biology, making it a very exciting experience for me.<br> Thanks also to the community, the discussions and notebooks were very helpful.</p> <h2>Overview</h2> <p>I used the following NN models.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F548996%2Fbf4221217029bc9d2ae30b78bf0d577d%2F.png?generation=1703936744527010&amp;alt=media" alt="overview"></p> <h2>Protein Sequence</h2> <p>I extracted embeddings from each protein sequence using protein language models.<br> The models I used include T5, ESM2t36, ESM2t48, and combinations of these.</p> <h2>Taxonomy Data</h2> <p>I one-hot encoded the taxonomy data and combined it with embeddings generated by protein language models. <br> During this process, only the 90 taxonomy IDs included in the test data were used.</p> <h2>Non-Experimental Annotation</h2> <p>The <a href="https://geneontology.org/docs/guide-go-evidence-codes/" target="_blank">evidence codes</a> of UniProt GOA annotations consist of 13 experimental labels and 11 non-experimental labels.<br> Experimental labels are used as GT and non-experimental labels are not used in the training data.<br> So I extracted non-experimental labels from UniProt GOA data and used them as additional features. <br> The non-experimental labels were one-hot encoded and transformed into a tensor of dimensions (batch size, 11, number of GT labels). <br> Then this tensor is processed using a 1D-CNN whose kernel_size is 1.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F548996%2F86368def7cbc9707ec8c8fb5c77f6217%2Fnon-exp-feat.png?generation=1704986539311314&amp;alt=media"></p> <h2>Validation Data</h2> <p>The training data was created from experimental annotations in UniProtKB as of November 17, 2022, and the test data (private leaderboard) consists of data experimentally verified after August 22, 2023. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F548996%2F6ba93a1b07cbdcdd3004f88a8ebb7952%2Fvalidation.png?generation=1703936790356554&amp;alt=media"><br> Therefore, I approached the validation strategy as if it were time-series data.<br> Validation data was created by extracting experimental annotations from the latest UniProtKB data at the time of the competition, and any sub-ontology data which were annotated in the training data was excluded from the validation data.</p>
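<p>As a hedged illustration of the non-experimental annotation branch described above, here is a minimal PyTorch sketch; the channel counts and the number of GO targets are assumptions, not the author's exact configuration.</p> <pre><code>import torch
import torch.nn as nn

n_go_labels = 1500  # assumed number of GO terms used as targets
branch = nn.Sequential(
    # kernel_size=1 mixes the 11 evidence-code channels independently
    # at every GO-term position
    nn.Conv1d(in_channels=11, out_channels=32, kernel_size=1),
    nn.ReLU(),
    nn.Conv1d(32, 1, kernel_size=1),
)

x = torch.zeros(8, 11, n_go_labels)  # one-hot non-experimental annotations
feat = branch(x).squeeze(1)          # (8, n_go_labels), concatenated with other features
</code></pre>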
Image Matching Challenge 2023
30th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to everyone for the journey we had throughout the competition. Also, thanks to the organizers for bringing the Image Matching Challenge to Kaggle again.<br> I will give a brief overview of my solution to this challenge.</p> <h1>Architecture</h1> <p>I came to the competition with very limited time and only a little experience from IMC 2022. So, it was essentially the first time I walked through the full flow of 3D reconstruction.<br> I strictly followed the host pipeline, which I split into 3 modules:</p> <ul> <li>Global descriptors</li> <li>Local descriptors (matching)</li> <li>Reconstruction</li> </ul> <p>My main work was focused on improving their efficiency separately.</p> <h1>Global descriptors</h1> <p>From my point of view, well-trained models on a <strong>landmark</strong> dataset could provide better descriptors than ImageNet pre-trained backbones.<br> As a result, I utilized some of the models that I had trained to compete in the <a href="https://www.kaggle.com/competitions/landmark-recognition-2021" target="_blank">Google Landmark 2021</a>, and then concatenated them into a global descriptor:</p> <pre><code>EfficientNetV2-M
EfficientNetV2-L  .
                  |--&gt;[concat]--&gt; [fc ]
ResNeSt-          /
ResNeSt-          /
</code></pre> <h1>Local descriptors</h1> <p>This year, competitors were required to perform matching within a strict time limit.<br> I focused on <strong>detector-based</strong> (2-stage) methods only, because I thought I could save time on the points detector part (for example, to match <em>(image_i, image_j)</em> and <em>(image_i, image_k)</em>, semi-dense and dense methods will have to "extract" <em>image_i</em> two times). When I read other top team solutions, it seemed I had made a wrong decision, since so many good matching models were omitted 😭. However, here is the list of methods I tried:<br> <strong>Detector</strong>: SuperPoint, SuperPoint + FeatureBooster, KeyNetAffNetHardNet, DISK, SiLK, ALIKE.<br> <strong>Matcher</strong>: SuperGlue, GlueStick, SGMNet, AdaLAM.<br> With the detector, I found that <strong>SuperPoint</strong> gave superior results to the others.<br> With the matcher, <strong>SuperGlue</strong> showed the best performance in accuracy and efficiency. <strong>GlueStick</strong> is quite good but slower. <strong>SGMNet</strong> is quite fast but scores lower. I then ensembled keypoints and matches from their predictions and filtered out duplicates.</p> <h1>Reconstruction</h1> <p>I didn't think I could improve much on this, so I only played with <strong>colmap parameters</strong> a bit to find out a (maybe) better combination than the default one.<br> Some parameters I changed:</p> <pre><code>max_num_trials
b_images_freq
b_max_num_iterations
b_points_freq
b_max_num_iterations
init_num_trials
max_num_models
min_model_size
</code></pre> <p>I could save a little time with a "lighter" combination of parameters while still keeping the accuracy.</p> <h1>Final thoughts</h1> <p>I guess mine is quite a simple solution, but it still gave me a silver :D.<br> However, the knowledge I gained from the competition may be the best I could achieve.<br> Thank you for reading and happy Kaggling!</p>
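<p>A minimal sketch of how such a concatenated global descriptor can be built and used to shortlist image pairs (assumed code, not the author's): L2-normalize each backbone embedding, concatenate, normalize again, then rank pairs by cosine similarity.</p> <pre><code>import torch
import torch.nn.functional as F

def global_descriptor(embeddings):
    # embeddings: list of (N, D_i) tensors from the different backbones
    parts = [F.normalize(e, dim=1) for e in embeddings]
    return F.normalize(torch.cat(parts, dim=1), dim=1)

# cosine-similarity matrix used to pick the most similar image pairs to match:
# desc = global_descriptor(feats); sim = desc @ desc.T
</code></pre>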
Parkinson's Freezing of Gait Prediction
4th Place Solution: a MultiLayer Bidirectional GRU with Residual Connections
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and the competition hosts for this competition, and congratulations to the other teams! This was for me the first Kaggle competition in which I invested myself, and it was an awesome experience, I truly learnt a lot. </p> <p>The model that performed the best for me is a variant of a multi-layer GRU model in which some residual connections and fully connected layers have been added between the GRU layers:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1969534%2F18364882e67c41284297f4eaed4ddb47%2Fmodel.svg?generation=1686319620125192&amp;alt=media" alt=""></p> <p>Here are the PyTorch classes corresponding to this model: </p> <pre><code>import torch.nn as nn

class ResidualBiGRU(nn.Module):
    def __init__(self, hidden_size, n_layers=1, bidir=True):
        super(ResidualBiGRU, self).__init__()
        self.hidden_size = hidden_size
        self.n_layers = n_layers

        self.gru = nn.GRU(
            hidden_size,
            hidden_size,
            n_layers,
            batch_first=True,
            bidirectional=bidir,
        )
        dir_factor = 2 if bidir else 1
        self.fc1 = nn.Linear(
            hidden_size * dir_factor, hidden_size * dir_factor * 2
        )
        self.ln1 = nn.LayerNorm(hidden_size * dir_factor * 2)
        self.fc2 = nn.Linear(hidden_size * dir_factor * 2, hidden_size)
        self.ln2 = nn.LayerNorm(hidden_size)

    def forward(self, x, h=None):
        res, new_h = self.gru(x, h)

        res = self.fc1(res)
        res = self.ln1(res)
        res = nn.functional.relu(res)

        res = self.fc2(res)
        res = self.ln2(res)
        res = nn.functional.relu(res)

        # residual connection
        res = res + x
        return res, new_h


class MultiResidualBiGRU(nn.Module):
    def __init__(self, input_size, hidden_size, out_size, n_layers, bidir=True):
        super(MultiResidualBiGRU, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.out_size = out_size
        self.n_layers = n_layers

        self.fc_in = nn.Linear(input_size, hidden_size)
        self.ln = nn.LayerNorm(hidden_size)
        self.res_bigrus = nn.ModuleList(
            [
                ResidualBiGRU(hidden_size, n_layers=1, bidir=bidir)
                for _ in range(n_layers)
            ]
        )
        self.fc_out = nn.Linear(hidden_size, out_size)

    def forward(self, x, h=None):
        # if h is None, the GRU hidden states are initialized to zeros
        if h is None:
            h = [None for _ in range(self.n_layers)]

        x = self.fc_in(x)
        x = self.ln(x)
        x = nn.functional.relu(x)

        new_h = []
        for i, res_bigru in enumerate(self.res_bigrus):
            x, new_hi = res_bigru(x, h[i])
            new_h.append(new_hi)

        x = self.fc_out(x)
        return x, new_h
</code></pre> <p>Note that the code can be simplified: for my best model which achieved a private lb score of 0.417, the "h" was actually always initialized with None. </p> <h2>Preprocessing</h2> <p>In terms of data, the choice I made for my model is very simplistic: consider only the accelerometer data (AccV, AccML, AccAP), merge the data from tdcsfog and defog together and train a single model on it. The main steps of my preprocessing pipeline are the following: </p> <ul> <li>downsample each sequence from its initial frequency (resp. 128 and 100 Hz) to 50Hz;</li> <li>for defog: <ul> <li>convert from g units to m/s^2 units;</li> <li>build a mask using "Valid" and "Task" to know which time steps are labeled during the training. The unlabeled time steps are fed to the model to get the full sequence context, but they are masked during the loss computation.</li></ul></li> <li>add a 4th "no-activity" class: the model is trained to recognize this class in the same way as the other classes. Outside of the loss, during the validation, I only use the 3 other classes to compute my metrics;</li> <li>per-sequence standard normalization (StandardScaler). 
</li> </ul> <p>Outside of the unlabeled time steps coming from defog, I did not use any other unlabeled data. </p> <p>For the prototype of another model I did not have the time to finish, I also began to consider some of the characteristics associated with the person who was producing the sequence, in particular I used "Visit", "Age", "Sex", "YearsSinceDx", "UPDRSIII_On" and "NFOGQ". This prototype was roughly following the same architecture as my best model; the main idea was to initialize the GRU's hidden states with these characteristics, after using some fully connected layers to project them in the dimension of the hidden states. This prototype was also using 1D convolutions to extract features from the accelerometer data before passing them to the GRU layers, and I also considered adding dropout. I think that with more time for me to tune it, it would have beaten my current best model. The first version achieved a private lb score of 0.398. </p> <h2>Training details</h2> <p>My best model - the one which scored 0.417 on the private leaderboard - has been trained without any form of cross-validation, only with a train/validation split of 80% / 20%. To be honest, this model appeared as a prototype in my early experimentation process, and I considered stratified cross-validation only after. </p> <p>In this solution, I fed <strong>each whole downsampled (50Hz)</strong> sequence to my model, one after the other, i.e. with <strong>a batch size of 1</strong>. Note that I would have been unable to feed some of the sequences to my model without downsampling them. I tried multiple different approaches with this architecture, but was unable to produce a better score when increasing the batch size. I tried multiple window sizes for my sequences; however, as I am pretty new in time series and as I also arrived pretty late in the competition, I did not implement any form of overlap and only thought about it too late. This could have probably been a key. Also when increasing the batch size, it seemed apparent that batch normalization was better than layer normalization. </p> <p>For the loss, I used a simple cross entropy. As the classes are pretty imbalanced (in particular with my 4th artificial one), I also considered using a weighted cross-entropy, using the inverse frequency of each class as a weight. I also considered trying a focal loss; but these initial tests seemed unable to perform better than the cross entropy in my case. Despite these negative experiments, I still think that dealing with the imbalanced nature of the problem in a better way than I did is important. </p> <p>In terms of optimizer, I used Ranger. I also tried Adam and AdamW and honestly I don't think this choice mattered too much. With Ranger I used a learning rate of 1e-3 with 20 epochs, with a cosine annealing schedule starting at epoch 15. </p> <p>Note that I also used mixed precision training and gradient clipping. </p> <p>The best parameters I found for the architecture of my model are:</p> <ul> <li>hidden_size: 128;</li> <li>n_layers: 3;</li> <li>bidir: True.</li> </ul> <p>Later on, I also tried a stratified k-fold cross-validation in which I stacked the predictions of my k models via a simple average. The architecture and the training details for each fold were the same as for my 0.417 lb model, and this stacking process led to my 2nd best model, scoring 0.415 on the private leaderboard (with k=5). I also tried retraining my model on the whole dataset, but this approach did not improve my lb score. 
</p> <p>In no particular order, here are a few other architectures that I also tried but which did not improve my score: </p> <ul> <li>replacing GRU by LSTM in my model: with my architecture, GRUs outperformed LSTMs in all the tests I ran;</li> <li>multiple variants of <a href="https://www.researchgate.net/publication/349964066_Multi-input_CNN-GRU_based_human_activity_recognition_using_wearable_sensors" target="_blank">https://www.researchgate.net/publication/349964066_Multi-input_CNN-GRU_based_human_activity_recognition_using_wearable_sensors</a>;</li> <li>a classic multi-layer bidirectional GRU followed by one or more fully connected layers, also with layer normalization and ReLUs. </li> </ul> <p>Edit: <br> Submission Notebook: <br> <a href="https://www.kaggle.com/zinxira/parkinson-fog-pred-4th-place-submission-notebook" target="_blank">https://www.kaggle.com/zinxira/parkinson-fog-pred-4th-place-submission-notebook</a></p> <p>Pretrained models "dataset": <br> <a href="https://www.kaggle.com/datasets/zinxira/models" target="_blank">https://www.kaggle.com/datasets/zinxira/models</a></p> <p>Full open-source code: <br> <a href="https://github.com/Zinxira/tlvmc-parkinsons-fog-prediction-4th-place-solution" target="_blank">https://github.com/Zinxira/tlvmc-parkinsons-fog-prediction-4th-place-solution</a></p>
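<p>For completeness, a minimal usage sketch of the <code>MultiResidualBiGRU</code> defined above with the best-reported settings; the input/output sizes (3 accelerometer channels in, 4 classes out including the artificial "no-activity" class) and the sequence length are assumptions.</p> <pre><code>import torch

model = MultiResidualBiGRU(input_size=3, hidden_size=128, out_size=4,
                           n_layers=3, bidir=True)
x = torch.randn(1, 5000, 3)   # one whole downsampled (50 Hz) sequence, batch size 1
logits, hidden = model(x)     # logits: (1, 5000, 4), one prediction per time step
</code></pre>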
Vesuvius Challenge - Ink Detection
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It was a very nervy ending to this competition; we held first for a long time, but I am always scared of a shake-up. We tried to play it very defensively to ensure that we weren't doing anything to over-optimize for the leaderboard. Still, there is always a certain level of uncertainty when the public leaderboard is so small and your local validation is not extremely stable.</p> <p><strong>tl;dr</strong><br> At a high level, I would attribute most of our success to:</p> <ul> <li>Larger crops </li> <li>Strong depth-invariant models</li> <li>Averaging several models to give us better calibration</li> <li>Training against all available data after validating against fragment 1 rigorously</li> </ul> <p><strong>Data prep</strong><br> We initially started with the <a href="https://www.kaggle.com/code/tanakar/2-5d-segmentaion-baseline-training" target="_blank">2.5d starter code</a> and evolved it over time. What we ended up settling on was just taking the middle 16 layers, but we tried plenty of other things that didn't work. I will save that for another post. Initially, the code was configured to train against all of the crops of the image, but we filtered this very simply based on if a crop was blank. This greatly reduced our training time because many of the crops had no data in them at all. </p> <p>Early on there was very little progress being made and then we ran simple ablations to see how performance was impacted by the crop size and found that there was good scaling potential there. It seemed like one of the clearest patterns we saw early on: 128 got beaten by 512, which got beaten by 1024. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2Fadddcf67a8c6ed4f68716e050215eb2d%2FWB%20Chart%206_15_2023%203_13_17%20PM.png?generation=1686867276107142&amp;alt=media" alt=""></p> <p>It seemed that the x,y context was very important here because the crop could look at a whole letter or even a couple letters at a time and draw them out more completely. Smaller crops yielded much patchier-looking outputs. One nice thing about this approach was it actually didn't increase training time at all. If you ran it with 1024x1024 crops you had to train against far fewer crops than if you did 128x128 crops, so the epoch time and wall-clock convergence time was virtually identical, unlike some other competitions where increasing the resize makes the computation take longer. </p> <p><strong>Models</strong><br> We started with the 2.5D approach but it became apparent that it wasn't optimal because the model was learning which layers had ink in them and we knew that this varied between fragments. We wanted a solution that was depth-invariant, that could detect ink in any layer. One approach would be training a simple 2d model on each slice of the 3d volume, but with the way the data was labeled, only 2d to start with, we would be giving bad signal to the model if we told it ink was in layers that it actually wasn't and vice versa. 
We had many discussions about how to handle this: 1D convolutions, max pooling, 3d convolutions with size (8, 1, 1) so it was only truly looking across the depth patterns. What we found to be the most performant was using strong 3d models that would output a new 3d volume with many channels and then we could flatten them along the depth axis with a max. </p> <p>So for example, our first approach in this vein was a simple 3d CNN. The input shape was (batch_size, 1, 16, 1024, 1024). We applied 4 layers of 3d convolutions on top of this with progressively more filters until finally, we had an output volume of (batch_size, 16, 16, 1024, 1024). At this point we just took the max across the z-dimension to squish it down to (batch_size, 16, 1024, 1024) so our 16 depth dimensions were now replaced with 16 feature dimensions. This alone was a decent approach but then passing this through a strong 2d segmentation model made it much better. We heavily relied on segformer for this; as others have mentioned, the b3 backbone worked great, but we also found that directionally the even bigger models performed better on the leaderboard even if they didn't perform measurably better on local validation. </p> <p>We iterated on this design because it seemed to satisfy the qualities we wanted, a strong 2d segmenter applied to a 3d volume that was invariant to depth. We tried different first-stage 3d methods and found that even stronger 3d models yielded better results, evolving to 3d unets and then eventually 3d unetr. Sometimes it was unclear whether the results were better, but we found a clear trend that the on-paper better models also did better on the leaderboard. Our best individual model was a UNETR first stage that passed 32 channels into a b5 segformer which scored 0.82 on the public leaderboard and 0.67 on the private. We applied a small amount of dropout on the channels in between the 3d and 2d stage. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2Fb59a1b2b524ffdbea3882fd6883e87ad%2FScreen%20Shot%202023-06-15%20at%204.12.57%20PM.png?generation=1686870810392089&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2Fd3566ee53b477abe6e4026db29d35531%2FScreen%20Shot%202023-06-15%20at%204.03.32%20PM.png?generation=1686870239615259&amp;alt=media" alt=""></p> <p>One thing I always wondered about was if the strength of the segformer wasn't actually that it was the best model but that it was making predictions at a lower resolution. We pass it the 1024x1024 and it returns a 256x256 segmentation map. We simply upscaled it with a very simple conv2d transpose. I believe the second place solution confirms that the lower resolution actually helps because it is doing much coarser classification, which makes it easier than trying to more precisely get every pixel. 
</p> <p>Our final best solution was 9 different models:</p> <table> <thead> <tr> <th>1st stage</th> <th>2nd stage</th> <th>Resolution</th> <th>Public Score</th> </tr> </thead> <tbody> <tr> <td>3d unet(16 channels)</td> <td>segformer b3</td> <td>1024</td> <td>.78</td> </tr> <tr> <td>3d unet(16 channels)</td> <td>segformer b3</td> <td>512</td> <td>.77</td> </tr> <tr> <td>3d unet(16 channels with SWA)</td> <td>segformer b3</td> <td>512</td> <td>.78</td> </tr> <tr> <td>3d cnn(32 channels)</td> <td>segformer b3</td> <td>1024</td> <td>.78</td> </tr> <tr> <td>3d cnn(32 channels)</td> <td>segformer b5</td> <td>1024</td> <td>.77</td> </tr> <tr> <td>3d cnn(64 channels)</td> <td>segformer b3</td> <td>1024</td> <td>.78</td> </tr> <tr> <td>3d unet(32 channels)</td> <td>segformer b5</td> <td>1024</td> <td>.79</td> </tr> <tr> <td>3d unetr(32 channels)</td> <td>segformer b5</td> <td>512</td> <td>.82</td> </tr> <tr> <td>3d unetr multiclass(32 channels)</td> <td>segformer b5</td> <td>512</td> <td>?</td> </tr> </tbody> </table> <p>One thing we tried late on was multi-class output. Instead of binary we tried to predict nothing-mask-ink. This actually ended up yielding a much cleaner looking output; in most of our models we had some amount of noise anywhere there was papyrus, and this seemed to quiet that down a lot. We did not get to explore it thoroughly enough to confirm if this really worked or not though. </p> <p><strong>Training procedure</strong><br> Our training procedure was fairly standard: we used the existing AdamW optimizer, dice+bce loss and hyperparameters, fixing some small bugs in setting the min-lr and continuing to use the gradual warmup learning rate scheduling. We added on stochastic weight averaging to get wider optima instead of needing to pick a specific checkpoint because we found these pretty inconsistent. We found that validating against fragment 1 wasn't perfect but was at least directionally useful. Once we found something that worked on local validation against fold 1 we would submit that to the leaderboard to double-check its validity and we would continue training with all folds for several epochs after. We would submit and evaluate the new checkpoint trained against all folds and it was typically about 0.04 better. Sometimes we would train against all fragments from the start instead of training fragments 2, 3 and later adding in 1, but we did not thoroughly evaluate this. </p> <p><strong>Augmentations</strong><br> We tried many permutations of augmentation, mostly from albumentations, some custom, and some from 3d packages, but ultimately couldn't find strong alpha there with anything fancy. Our best model was trained with:</p> <ul> <li>50% horizontal and vertical flips</li> <li>75% 90-degree rotations</li> <li>50% brightness contrast</li> <li>25% 1-2 channel dropout (in this case our channels were actually depth)</li> <li>10% shift scale rotate</li> <li>10% noise and blur</li> <li>10% coarse dropout</li> <li>10% grid distortion</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2F44380bdf9ac40366cfd12f6db39f1bef%2FScreen%20Shot%202023-06-15%20at%204.29.24%20PM.png?generation=1686871786148868&amp;alt=media" alt=""></p> <p>Overall it seemed like the rotations and flips were crucial and everything else was a non-factor. Rotations seemed important but we did not ascertain that the test set itself was actually rotated; we just knew it was important, so we tried to make the model as invariant as possible. 
</p> <p><strong>Ensembling</strong><br> We ended up with a big pile of model checkpoints and had to whittle them down to what we believed to be the most performant, trading off runtime vs throughput. We tried many different combinations: heavier TTA with all rotations and flips, more models, smaller strides. The winner seemed to be 4x rotation TTA with 1/4 crop strided windows and as many good models as we could fit in. Halving the stride helped an extra .01, but that was trumped by being able to add way more models; flips didn't seem to add anything at all. It's possible that with just the corrected orientation instead of TTA we could have done better, but a model invariant to rotations seemed just as strong.</p> <p>One thing we struggled with a lot was how to best combine predictions. For each model we predicted on the same pixel 4x because of our strided approach and 4x of that because of TTA. With many models this actually gave us a ton of options for aggregating the predictions per pixel. We tinkered with a lot of stuff locally but what seemed to work best was averaging per pixel for each model all ~16x predictions, then applying the sigmoid to that averaged signal, and then averaging those probabilities together. This posed a tough memory constraint on us on Kaggle's system so we had to be a bit efficient with putting things away and accumulating them as densely as possible instead of just creating large arrays and averaging at the end. </p> <p><strong>Thresholding</strong><br> One thing we went back and forth on for a long time was the calibration of predictions. As discussed in other posts, deciding a threshold was critical to getting good results and choosing the wrong threshold could give you very misleading signal on your model's performance. During training and evaluation we would constantly be monitoring the AUC, Precision, Recall and f0.5 at a sweep of thresholds. Some models were well-calibrated with an optimal threshold at 0.5, but many were not. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2Ffee3f2b5c722068974d291af29fc7049%2FScreen%20Shot%202023-06-15%20at%204.58.40%20PM.png?generation=1686873556582356&amp;alt=media" alt=""></p> <p>Because of this, we knew that even a great model could show up as terrible on the leaderboard if its threshold wasn't set correctly. We considered using the percentile method that some people used but did not even make a submission for it because it seemed too risky if the distribution of ink was not what we expected. Ultimately what we relied on was that, averaged out, our predictions would end up calibrated. We found this to be true on our local validation, and it held true for the leaderboard as well. Individual models would have wide optimal threshold ranges but after averaging many predictions from many models it was almost universally centered on 0.5. We actually used our last submission to be brave and try 0.55, but it did worse on the public leaderboard and finished with only 30 minutes left to spare, so we didn't pick it. It did end up performing slightly higher on the private leaderboard though. I think we actually left a lot on the table in terms of ensembling and optimal thresholding. </p> <p><strong>Postprocessing</strong><br> One little magic function we borrowed from the old cloud segmentation competition was used to clean up our predictions. 
After everything was done and already binarized we would use cv2.connectedComponents to find the masses and remove them if they were below a certain size. This would clean up anything that was too small, just little speckles and noise. We found it locally optimal to set our threshold a little lower and then clean up the extras, anything under 25000 pixels, but we only cleaned up things under 10k on our submissions and didn't try more aggressive cleaning. </p> <p>Before</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2F25116a2dfc769b5784abb12704df8401%2F__results___27_30.png?generation=1686874373183834&amp;alt=media" alt=""></p> <p>After<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2F9a6ac60974f49a6ecfbb80f47fcccbb0%2F__results___27_31.png?generation=1686874392858212&amp;alt=media" alt=""></p>
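<p>A minimal sketch of this kind of connected-component cleanup (assumed code, using the 10k-pixel limit mentioned above):</p> <pre><code>import cv2
import numpy as np

def remove_small_components(binary_mask, min_size=10_000):
    # drop any connected blob smaller than min_size pixels from a binarized mask
    n, labels, stats, _ = cv2.connectedComponentsWithStats(
        binary_mask.astype(np.uint8), connectivity=8
    )
    cleaned = np.zeros_like(binary_mask, dtype=np.uint8)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_size:
            cleaned[labels == i] = 1
    return cleaned
</code></pre>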
CAFA 5 Protein Function Prediction
First Kaggle competition : 10th place public leaderboard; Nth place private leaderboard
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CAFA 5 Protein Function Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2><strong>Introduction</strong></h2> <p>Firstly I want to thank the organizers for a very nice competition - having it based on Kaggle opened up submissions to a wider range of people compared to previous iterations, when mainly specialist labs showed results, so the wider net of people being exposed to CAFA is really gratifying to see. My team worked hard on this problem and although we fell short in the rankings it is humbling to look back at the progress we made. Both of us are molecular biologists with only limited bioinformatics and machine learning experience - so this competition was something of a trial by fire and we learned a lot from taking part.</p> <h2><strong>Approach</strong></h2> <p>We used these datasets in our model: <br> 1) Prot-T5, ESM2, and Ankh Protein Language Model (PLM) embeddings. We carried out no further modifications or finetuning on the output of PLMs, only conversion to float32 to save memory. <br> 2) A single binary matrix representing species taxonomy for each protein.<br> 3) Text information obtained by tf-idf of abstract information from academic papers associated with each protein.</p> <p>For GO labels, we separately identified 1500 BPO, 800 CCO, and 800 MFO terms from each ontology for each model to classify by taking the top N labels sorted by IA*frequency. </p> <p>Datasets were sorted in alphabetical order to ensure identical ordering across datasets, as well as removal of a single duplicate row in the test data. </p> <p>We made a simple dense neural network in keras that takes these as separate inputs before concatenating them into a final layer that attempts to predict every label from a single GO ontology/domain. We used binary cross entropy (BCE) as our loss function and Adam optimizer with lr=0.0003 and a simple lr scheduler. We further used the IA weights as class_weight when calling model.fit(). We added dropout and batch normalization layers to our model as well.</p> <p>Our model was trained with KFold = 5 across 5 different random splits of the training data before a simple average of predictions was made for each ontology/domain and the results concatenated. </p> <p>One interesting thing we observed is that Ridge regression sometimes performed better on PLM embeddings compared to a neural network for some ontologies, which was unexpected and might indicate underfitting for the neural network. </p> <h2><strong>What didn't work</strong></h2> <p>ProtBert embeddings performed poorly and were not utilized at all. <br> Data from STRING did not prove helpful in the final model. 
<br> We attempted to use a separate transformer layer that takes in label embeddings created with Anc2Vec (<a href="https://academic.oup.com/bib/article/23/2/bbac003/6523148" target="_blank">https://academic.oup.com/bib/article/23/2/bbac003/6523148</a>) but it did not improve model performance.<br> We attempted to use a hierarchy aware loss function (<a href="https://papers.nips.cc/paper/2020/file/6dd4e10e3296fa63738371ec0d5df818-Paper.pdf" target="_blank">https://papers.nips.cc/paper/2020/file/6dd4e10e3296fa63738371ec0d5df818-Paper.pdf</a>) but it did not perform better than BCE.</p> <h2><strong>Scores</strong></h2> <p>Using merely the PLM embeddings along with taxonomic information we obtained a public leaderboard (PL) score of about 0.55. Including text information increases the PL score to ~0.58.<br> Ensembling with a public ensemble notebook (<a href="https://www.kaggle.com/code/adaluodao/merge-datasets" target="_blank">https://www.kaggle.com/code/adaluodao/merge-datasets</a>) increases PL score to ~0.62.</p> <h2><strong>EDIT</strong></h2> <p>4th place overall! <br> Code used to generate our models can be found here:<br> <a href="https://github.com/zongmingchua/cafa5" target="_blank">https://github.com/zongmingchua/cafa5</a> <br> Final ensembling is shown here:<br> <a href="https://www.kaggle.com/code/zmcxjt/merge-datasets" target="_blank">https://www.kaggle.com/code/zmcxjt/merge-datasets</a></p>
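<p>As a hedged sketch of the target-selection step described above (column names and the helper are hypothetical): rank one ontology's GO terms by information accretion times frequency and keep the top N.</p> <pre><code>import pandas as pd

def select_terms(annotations: pd.DataFrame, ia: dict, top_n: int) -> list:
    # annotations: one row per (protein, GO term); ia: term -> information accretion weight
    freq = annotations["term"].value_counts()
    score = freq * freq.index.map(lambda t: ia.get(t, 0.0))
    return score.sort_values(ascending=False).head(top_n).index.tolist()

# e.g. bpo_terms = select_terms(train_bpo, ia_weights, top_n=1500)
</code></pre>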
Vesuvius Challenge - Ink Detection
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1><strong>5th place solution</strong></h1> <p>I would like to say thank you to the organizers for an interesting competition, and to users <a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a> and <a href="https://www.kaggle.com/samfc10" target="_blank">@samfc10</a> for their great public notebooks which gave me a good start in this competition. My solution is not very sophisticated and is based on 3D Resnet models; however, it gives good results on both public and private LB.</p> <p><strong>Models:</strong></p> <p>3dResnet with architectures resnet18 and resnet34</p> <p><strong>Data preprocessing:</strong></p> <p>empty tiles which do not belong to papyrus tissue were ignored</p> <p>split: 4 folds (1, 2a, 2b, 3)</p> <p>tile size: 256</p> <p>stride: 128</p> <p>32 slices between 16 and 48 indices</p> <p><strong>Training:</strong></p> <p>epochs: 50</p> <p>fp16</p> <p>loss: BCE</p> <p>optimizer: AdamW</p> <p>scheduler: GradualWarmupScheduler</p> <p>hard augmentation: mosaic augmentation + albumentation augmentation</p> <p>mosaic augmentation:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1906164%2Ffcc229a2841ba660e0ef8e93518a1c92%2Fmosaic.png?generation=1686922603788993&amp;alt=media" alt=""></p> <p>albumentation augmentation:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1906164%2F2f1ca34594848cb7464023bf1f379d38%2Falbu.png?generation=1686921911927180&amp;alt=media" alt=""></p> <p><strong>Inference:</strong></p> <p>final ensemble models: 2 folds (1, 2a) 3dResnet18 models + 4 folds 3dResnet34 models</p> <p>tile size: 1024</p> <p>stride: 512</p> <p>output predictions close to the tile's edges are ignored</p> <p>threshold: 0.5</p> <p>TTA: 4 rotates, h/v flips</p> <p>denoising</p> <p>Result: 0.804 on Public LB and 0.668 on Private LB</p> <p><strong><a href="https://github.com/aksell1981/inkdet_solution" target="_blank">Training code</a></strong><br> <strong>Inference notebook:</strong> <a href="https://www.kaggle.com/code/aksell7/3dresnet18-3dresnet34-infer/notebook" target="_blank">https://www.kaggle.com/code/aksell7/3dresnet18-3dresnet34-infer/notebook</a></p>
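<p>A minimal NumPy sketch (assumed code, with an assumed margin) of the "ignore predictions close to the tile's edges" step above: only the central region of each tile prediction is accumulated into the full-size probability map.</p> <pre><code>import numpy as np

def accumulate_tile(pred_sum, count, tile_pred, y, x, margin=64):
    # tile_pred: (T, T) probabilities for the tile whose top-left corner is (y, x)
    t = tile_pred.shape[0]
    inner = tile_pred[margin:t - margin, margin:t - margin]
    ys, xs = y + margin, x + margin
    pred_sum[ys:ys + inner.shape[0], xs:xs + inner.shape[1]] += inner
    count[ys:ys + inner.shape[0], xs:xs + inner.shape[1]] += 1
    return pred_sum, count

# final probability map: pred_sum / np.maximum(count, 1)
</code></pre>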
Benetech - Making Graphs Accessible
6th place solution - deplot & UNet postprocessing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to kaggle and benetech for hosting an interesting problem. It was a great team effort with <a href="https://www.kaggle.com/darraghdog" target="_blank">@darraghdog</a> </p> <h2>TLDR</h2> <p>Our solution is a 4 seed blend of a single deplot model for all chart types. For scatter plots we run additional post processing by using the number of scatter points predicted by a U-Net point segmentation model. </p> <h2>Data preprocessing</h2> <h3>Benetech dataset</h3> <p>For training we used all of benetech extracted data, and approx 20% of the generated data which was selected based on the hardest samples. This helped speed up the training pipeline significantly. <br> Numerical values had a lot of digits in ground truth which did not match well with our loss function (cross entropy) in pix2struct based models. Therefore, it was important to round these numerical values. For this, we bucketed the labels to approx 5 buckets within the tick label axis. So, for example, the buckets in the below would be <code>0, 40000, 80000, 120000, 160000, 200000, 240000… etc.</code> and the y-axis label to learn would be <code>840000, 960000, 760000, 700000, 580000, 460000, etc.</code>. For ground truth labels which had no tick labels, similar rounding was performed based on the min-max range of the data series, using approximately 40 buckets. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1424766%2Fdd90fda19f8e33c767b75e32188a0c29%2FScreenshot%202023-06-20%20at%2021.37.57.png?generation=1687289889671066&amp;alt=media" alt=""></p> <p>Within the pix2struct model, the labels were learned as key-value sequences, similar to the training of deplot. For the example above, that would be<br> <code>&lt;pad&gt;Malaysia;960000|Malawi;840000|Maldives;760000|Mali;700000|Mauritania;660000|Melanesia;580000|Malta;580000|Mauritius;540000|Martinique;460000|Mayotte;420000&lt;\s&gt;</code></p> <p>A separate decoder was used to learn the reversed key-value sequence. This was linked to the same encoder and added as an auxiliary loss so the model would not focus more on either side of the image. Since the heavy part of deplot is the encoder, a double decoder did not create much overhead.</p> <h3>External data</h3> <p>For this competition, we utilized a commercially available subset of PubMed articles to train our models. Approximately 80% of the labeled samples in the ICDAR dataset were found to originate from this commercial dataset. We ensured that the use of this dataset complied with the applicable licensing terms. To expand our training data, we applied pseudo-labeling techniques followed by manual review. Specifically, we pseudo-labeled and manually reviewed around 8,000 bar and line graph samples from PubMed articles. The pseudo-labeling results were quite accurate on bars, requiring only verification that the identified charts were the correct type (e.g. removing box plots). Lines needed some corrections. We also used our U-Net model to pseudo-label approximately 1,000 scatter plot samples from PubMed, and then manually added any missing data points. 
Using the VGG Image Annotator tool, this manual annotation process was surprisingly fast and efficient. </p> <h3>Data generation</h3> <p>We used the great <a href="https://www.kaggle.com/code/brendanartley/benetech-5-chart-types-generator" target="_blank">public kernel</a> by <a href="https://www.kaggle.com/brendanartley" target="_blank">@brendanartley</a> in two ways</p> <p>We pretrained our deplot models using the 500k images downloadable the public kernel, which gave a small improvement, compared to using the original checkpoint from huggingface. </p> <p>We also tweaked the kernel a bit to generate more sophisticated scatter plots (more points, different markers, more overlap etc) and output x/y coordinates of each scatter point. We generated 50k scatter plots with that script and used them additionally for training the segmentation model</p> <h2>Models</h2> <p>Deplot was used for predicting bars, lines and scatter; for scatter, we additionally used a u-net model. </p> <p>Deplot was trained for 20 epochs, with a learning rate of approx 5e-5 and batchsize 8. <a href="https://github.com/albumentations-team/albumentations/blob/master/albumentations/augmentations/transforms.py#L2413" target="_blank">Pixeldropout augmentation</a> was found to prevent overfitting. No other image augmentations were used. For experimentation, 1024 patches were used which decreased training time 2X with only a slight degradation in scores. The loss was a combination of cross entropy on the key value sequences mentioned above and the chart type. A linear layer on top of the first token of deplot was used to predict the chart type as an auxiliary.</p> <p>For scatter plots we used a U-Net (EfficientNet-B7 encoder) + non maximum suppression (NMS) to predict number of scatter points and performed a simple yet effective postprocessing where deplot predicted a different number of points:</p> <ul> <li>If deplot predicted too many points, disregard the last ones</li> <li>If deplot predicted too few points, fill missing with the mean value</li> </ul> <p>Below you can see the segmentation mask for U-Net training. Using the scatter point annotation we created “smooth” points with 10 pixel gaussian radius and drew onto an empty image. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1424766%2Fe0cbc56d90a54892ee8a4365b03d2f59%2FScreenshot%202023-06-20%20at%2021.44.38.png?generation=1687290294572628&amp;alt=media" alt=""></p> <h2>Ensembling</h2> <p>For deplot we did inner ensembling by averaging decoder prediction in each greedy decoding step. 
For Unet we simply averaged predicted masks before performing NMS.</p> <h2>Ablation study (Public LB Scores)</h2> <p>The following table gives a quick overview, how much each component contributed to our solution</p> <table> <thead> <tr> <th>component</th> <th>score</th> </tr> </thead> <tbody> <tr> <td>Baseline deplot@1024 tokens</td> <td>0.71</td> </tr> <tr> <td>using fullfit</td> <td>+0.01</td> </tr> <tr> <td>1024 -&gt; 2048 tokens</td> <td>+0.03</td> </tr> <tr> <td>Scatter pp Unet@512x512</td> <td>+0.05</td> </tr> <tr> <td>Pixeldropout</td> <td>+0.01</td> </tr> <tr> <td>External data</td> <td>+0.02</td> </tr> <tr> <td>Pretrain on 500k</td> <td>+0.01</td> </tr> <tr> <td>Unet@512x512 -&gt; 768x768</td> <td>+0.01</td> </tr> <tr> <td>3x Unet@768x768</td> <td>+0.01</td> </tr> <tr> <td>4x seed blend of deplots</td> <td>+0.01</td> </tr> <tr> <td><strong>final score</strong></td> <td><strong>0.87</strong></td> </tr> </tbody> </table> <h2>Used tools/ repos</h2> <ul> <li>Unet:<a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank"> Segmentation models pytorch</a> + <a href="https://github.com/huggingface/pytorch-image-models/tree/main/timm" target="_blank">timm encoder</a></li> <li>Deplot: <a href="https://huggingface.co/google/deplot" target="_blank">huggingface</a></li> <li>logging/ visualization: <a href="https://neptune.ai/" target="_blank">neptune.ai</a></li> <li>augmentation: <a href="https://albumentations.ai/" target="_blank">albumentations</a></li> <li>labelling: <a href="https://www.robots.ox.ac.uk/~vgg/software/via/" target="_blank">VGG Image Annotator tool</a></li> </ul> <p>Thank you for reading, questions welcome</p> <p>edit:</p> <p>June 21st: inference kernel made public: <a href="https://www.kaggle.com/code/christofhenkel/benetech-6th-place-dd/notebook" target="_blank">https://www.kaggle.com/code/christofhenkel/benetech-6th-place-dd/notebook</a></p>
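<p>A minimal sketch of the scatter post-processing rule described above (assumed code): reconcile the deplot-predicted series with the point count coming from the U-Net + NMS stage.</p> <pre><code>def adjust_scatter_points(values, n_points):
    # too many predicted values: drop the trailing ones; too few: pad with the mean
    values = list(values)
    if len(values) >= n_points:
        return values[:n_points]
    fill = sum(values) / len(values) if values else 0.0
    return values + [fill] * (n_points - len(values))

# adjust_scatter_points([1.0, 2.0, 3.0], 5) -> [1.0, 2.0, 3.0, 2.0, 2.0]
</code></pre>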
Parkinson's Freezing of Gait Prediction
57th place solution: Conv1d model with aux head
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi! 👋 I want to share my solution with you.<br> First of all, I want to thank the organizers of this competition!</p> <p>Then I want to thank the participants who made useful EDA notebooks and notebooks with baselines.<br> <em>This solution was inspired by these two notebooks: <a href="https://www.kaggle.com/code/coderrkj/parkinson-fog-pred-conv1d-separate-tf-model" target="_blank">this one</a> and <a href="https://www.kaggle.com/code/mayukh18/pytorch-fog-end-to-end-baseline-lb-0-254" target="_blank">this one</a>.</em></p> <h1>Data preprocessing</h1> <p>For train and inference, I used the rolling window technique. The window was made so the target timestep is in the middle of this window.<br> Size of the window for training: 256<br> Size of the window during inference: 1224</p> <p><strong>AccV, AccML, AccAP</strong> features were preprocessed with savgol filter (kernel=21, n=3).<br> For the <strong>Time</strong> feature I used <code>pandas.qcut</code> to discretize it with 10 bins.</p> <h1>Model architecture</h1> <p>I used Conv1D model with two heads.<br> The first head (multilabel) is the main one, which performs multilabel classification (<em>StartHesitation, Turn, Walking</em>).<br> The second head (binary) is aux head, which classifies into two classes: <em>event</em>, <em>no event</em>.</p> <p>Below is the model architecture.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4268817%2Fa84cc1f97065442d4183a84de2bdf98f%2Fpfogp_solution.drawio.png?generation=1686319440790344&amp;alt=media" alt=""></p> <p>In this model, I also used Conv1D layer with kernel size 8 and stride 8. 
This helps to reduce input size while keeping important information as much as possible.</p> <h1>Training</h1> <p>I trained <strong>5 models</strong> on a dataset split by subject_id using <code>GroupKFold</code>.<br> <strong>Optimizer</strong>: AdamW + LookAhead<br> <strong>LR</strong>: 0.0002<br> <strong>Scheduler</strong>: OneCycleLR<br> <strong>Loss</strong>: FocalLoss (binary head) + BCEWithLogitsLoss (multilabel head)<br> <strong>Epochs</strong>: 20</p> <h1>Inference</h1> <p>Inference was performed using an ensemble of 5 models.<br> CV scores for each of the models: 0.16, 0.175, 0.198, 0.254, 0.317</p> <p>Then I found that using a bigger window size during inference gives better results than using a window size of 256 (which the model was trained on).<br> Also, using an ensemble of different window sizes (3 groups of 5 models) also gives better results.</p> <p>Here is a table of some of my submissions:</p> <table> <thead> <tr> <th>N of models</th> <th>Window size</th> <th>Public score</th> <th>Private score</th> </tr> </thead> <tbody> <tr> <td>10 (5 + 5)</td> <td>1024 + 1224</td> <td>0.37</td> <td>0.303</td> </tr> <tr> <td>5</td> <td>1024</td> <td>0.352</td> <td>0.307</td> </tr> <tr> <td>15 (5 + 5 + 5)</td> <td>256 + 512 + 1024</td> <td>0.351</td> <td>0.311</td> </tr> </tbody> </table> <p>For the final submission, I chose the first one (😭).</p> <h1>Interesting takeaways</h1> <p>While doing experiments, I found several interesting takeaways:</p> <ul> <li>Aux stuff (like heads or losses) can improve the performance score.</li> <li>Smaller models perform better than bigger ones</li> <li>ConvNets perform better than transformers with approximately the same number of parameters</li> <li>Using a bigger window size for training reduces LB score, but using a bigger window size during inference (when the model was trained on a smaller window size) increases LB.</li> </ul> <h1>Conclusions</h1> <p>It was an interesting experience for me to participate in this competition. I learned several useful optimization and training tricks (e.g. using an aux head). I hope you will find my solution interesting too!<br> If you have any questions about my solution, I would be happy to answer them! 😊</p>
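<p>A minimal sketch of the preprocessing described above (assumed code and column names): Savitzky-Golay smoothing of the three accelerometer channels and quantile binning of Time.</p> <pre><code>import pandas as pd
from scipy.signal import savgol_filter

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    for col in ["AccV", "AccML", "AccAP"]:
        # Savitzky-Golay filter with window 21 and polynomial order 3
        df[col] = savgol_filter(df[col].to_numpy(), window_length=21, polyorder=3)
    df["TimeBin"] = pd.qcut(df["Time"], q=10, labels=False)  # 10 discrete time buckets
    return df
</code></pre>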
Image Matching Challenge 2023
12th Place Solution: SP+SG
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the competition organizers and Kaggle staff for hosting this amazing competition and for their solid support. Thanks to each of my team members for their creative perspective and effort, which gained us this place in the competition.</p> <ol> <li><p>Overview<br> Our final model is rather simple. The solution is based on a modular toolbox named Hierarchical-Localization (an implementation of the typical SP+SG structure), and the key modification is to set the input resolution to 2000.<br> As the resolution increased from 1024 to 2000, the public LB score increased from 0.24 to 0.46. As the resolution further increased to 3000, there was still improvement on the train set (especially "wall").</p></li> <li><p>Configuration in detail </p> <pre><code>confs = {
    'superpoint_aachen': {
        'output': 'feats-superpoint-n-r',
        'model': {
            'name': 'superpoint',
            'nms_radius': 3,        # hloc default
            'max_keypoints': 4096,  # hloc default
        },
        'preprocessing': {
            'grayscale': True,
            'resize_max': 2000,     # key modification: 2000 (hloc default is 1024)
        },
    }
}
</code></pre> <pre><code>'superglue': {
    'name': 'superglue',
    'weights': 'outdoor',
    'sinkhorn_iterations': 50,      # hloc default
}
</code></pre></li> <li><p>Tricks that didn't work<br> Our code suffers from randomness; the difference in public LB score for the same code can be up to 0.04! Unfortunately, during the whole competition, we failed to find a way to eliminate this randomness. We deemed a modification useless if no obvious improvement was observed over repeated submissions, but there may still be mistakes.</p> <ul> <li>Image retrieval<br> After incremental mapping, we tried to add the images that failed to register to the successfully reconstructed model.</li> <li>TTA<br> We reversed, concatenated, and applied NMS to features extracted from [original, left-right flip, 10-deg rotation]; however, computation time increased hugely while no obvious improvement was observed.</li> <li>Ensembling SIFT with SP<br> We learned that SP fails when the image is extremely in-plane rotated. So we tried to perform SIFT extraction and matching using pycolmap, extract the matches from the database, and concatenate them with SPSG. This trick can improve the score for "cyprus" but is useless on this year's test set. Worth noting is that it doesn't harm the LB score either.</li> <li>Multi-models and multi-resolution<br> We tried to ensemble SPSG with DKM, LoFTR, QuadTree, and SiLK, but these models performed poorly despite using the same resolution as SPSG. Some of them are one-stage models, which made them inefficient for the task. We tried to concatenate [800, 1500, 2000] resolutions computed by SPSG; the result is nearly the same as 2000 alone.</li> <li>Suppress randomness with more iterations<br> Hloc uses geometric verification (<code>pycolmap.verify_matches</code>); setting <code>max_num_trials=40000</code> didn't help.</li></ul></li> <li><p>Some experience from the competition<br> Reading images using cv2 can lose EXIF information, which is not conducive to reconstruction.
But there's a counter-example: "theater" can score even higher after that information is removed.</p></li> <li><p>References</p></li> </ol> <ul> <li>Hloc<br> <a href="https://github.com/cvg/Hierarchical-Localization/" target="_blank">https://github.com/cvg/Hierarchical-Localization/</a></li> <li>SuperPoint introduction<br> <a href="https://github.com/magicleap/SuperPointPretrainedNetwork/blob/master/assets/DL4VSLAM_talk.pdf" target="_blank">https://github.com/magicleap/SuperPointPretrainedNetwork/blob/master/assets/DL4VSLAM_talk.pdf</a></li> </ul>
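<p>For readers new to Hierarchical-Localization, configurations like the ones above are normally consumed by hloc's standard SfM pipeline. The sketch below follows hloc's public demo (paths and the exhaustive pairing are placeholders, and signatures may differ slightly between hloc versions); it is an illustration of the toolbox, not the team's exact script.</p>
<pre><code>from pathlib import Path
from hloc import extract_features, match_features, pairs_from_exhaustive, reconstruction

images = Path("images/")       # input image folder (placeholder)
outputs = Path("outputs/")
sfm_pairs = outputs / "pairs.txt"

# Start from hloc's built-in configs and raise the resolution as described above
feature_conf = extract_features.confs["superpoint_aachen"]
feature_conf["preprocessing"]["resize_max"] = 2000
matcher_conf = match_features.confs["superglue"]

pairs_from_exhaustive.main(sfm_pairs, image_list=[p.name for p in images.iterdir()])
feature_path = extract_features.main(feature_conf, images, outputs)
match_path = match_features.main(matcher_conf, sfm_pairs, feature_conf["output"], outputs)
model = reconstruction.main(outputs / "sfm", images, sfm_pairs, feature_path, match_path)
</code></pre>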
Parkinson's Freezing of Gait Prediction
20th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the organizers for the fun competition and everyone who participated.<br> I share my solution.</p> <h1>Summary</h1> <ul> <li>dataset: tdcsofg + defog (not use notype/unlabeled)</li> <li>model: Conv1d NN (common model for tdcsfog and defog) </li> <li>6 models ensemble</li> </ul> <h1>1. baseline model</h1> <ul> <li>I use this notebook. <a href="https://www.kaggle.com/code/mayukh18/pytorch-fog-end-to-end-baseline-lb-0-254" target="_blank">PyTorch FOG End-to-End Baseline [LB 0.254] </a></li> <li>Thank you for <a href="https://www.kaggle.com/mayukh18" target="_blank">@mayukh18</a> </li> </ul> <h1>2. Validation Strategy</h1> <ul> <li>GroupKFold : 5-fold (groups=Subject)</li> <li>Since it was imbalanced data, I adjusted the seed so that the number of cases and the ratio would be the same as much as possible</li> </ul> <h1>3. Preprocess data</h1> <ul> <li>Align the units of acceleration (divided by 9.8066 for tdcsfog). </li> <li>features:<ul> <li>raw data: AccV, AccML, AccAP</li> <li>AccMG: np.sqrt(AccV^2 + AccML^2 + AccAP^2)</li> <li>Time_freq: df["Time"] / df["Time"].max()</li> <li>tdcs flag</li> <li>Rolling Window Features</li></ul></li> </ul> <pre><code> for col in [, , , ]: for w in [10, 50, 100, 1000]: df[f] = df[col] df[f] = df[col] - \ df[col] df[f] = df[col] - df[col] </code></pre> <h1>4. Model</h1> <ul> <li>I define 2 models.<ul> <li>model A: <ul> <li>input's feature: 54 features (all feature)</li> <li>window_size=32: past=24, future=8, wx=8</li> <li>Input - (Conv1d:ks=3/5/10 - GAP) x 3 - MLP - Outputs</li></ul></li> <li>model B:<ul> <li>input's feature: 6 features (3raw + MG + Time_freq + tdcs)</li> <li>window_size=256: past=192, future=64, wx=1</li> <li>Input - Conv1d:ks=90 - GAP - MLP - Outputs</li></ul></li></ul></li> <li>training parameter:<ul> <li>loss: BCEWithLogitsLoss / BCE+CELoss(4label)</li> <li>optimizer: Adam(2e-5)</li> <li>epoch: 10</li></ul></li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F164129%2F1d909bdab5119592b4ed78f061040fb6%2Fmodel.png?generation=1686321028285326&amp;alt=media" alt="model.jpg"></p> <h1>5. Postprocess</h1> <ul> <li>moving average of predicted values: window=500</li> <li>label 1 is continuous so I could increase score just a little. (+0.004)</li> </ul> <h1>6. 
Ensemble</h1> <ul> <li>I trained 6 models.</li> <li>The ensemble method is an equally weighted average.</li> </ul> <table> <thead> <tr> <th>id</th> <th>inputs</th> <th>win_size</th> <th>Acc-units</th> <th>model</th> <th>local-cv</th> <th>public</th> <th>private</th> </tr> </thead> <tbody> <tr> <td>v10</td> <td>54feats</td> <td>32(wx=8)</td> <td>None</td> <td>modelA</td> <td>0.290</td> <td>0.392</td> <td>0.316</td> </tr> <tr> <td>v13</td> <td>54feats</td> <td>32(wx=8)</td> <td>None</td> <td>modelA (custom)</td> <td>0.300</td> <td>0.376</td> <td>0.305</td> </tr> <tr> <td>v51b</td> <td>54feats + 2feats</td> <td>48(wx=8)</td> <td>div by 9.8</td> <td>modelA</td> <td>0.302</td> <td>0.386</td> <td>0.321</td> </tr> <tr> <td>v60</td> <td>54feats</td> <td>32(wx=8)</td> <td>div by 9.8</td> <td>modelA (+CELoss)</td> <td>0.301</td> <td>0.388</td> <td>0.306</td> </tr> <tr> <td>v59</td> <td>54feats</td> <td>32(wx=8)</td> <td>None</td> <td>modelA (other cv)</td> <td>0.324</td> <td>0.390</td> <td>0.312</td> </tr> <tr> <td>v58</td> <td>3raw+3feats</td> <td>256(wx=1)</td> <td>div by 9.8</td> <td>modelB</td> <td>0.306</td> <td>0.350</td> <td>0.312</td> </tr> <tr> <td></td> <td></td> <td></td> <td></td> <td></td> <td>ensemble</td> <td>0.411</td> <td>0.324</td> </tr> </tbody> </table> <h1>Did't work</h1> <ul> <li>Transformer / LSTM / 1dcnn+LSTM / wavenet : did not improve score</li> <li>Data conversion by Fourier transform / MFCC: score got worse</li> <li>RobustScaler: almost the same</li> <li>pseudo-labeling for notype and unlabeled-data: score got worse</li> <li>use Subject data: score got worse</li> <li>separate model for tdcsfog and defog: score got worse</li> </ul> <p>There were a few methods that worked for the top teams. I may have done something wrong.</p> <h1>My question</h1> <ul> <li>Why did I get similar scores with and without matching acceleration units? (Sometimes it is better not to match)</li> <li>I didn't want to include Time_freq to make it a generic model. How can I improve my score without this?</li> </ul> <p>Thank you for reading.</p>
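<p>The rolling-window snippet in the preprocessing section above lost its string literals and aggregation calls, so here is a hedged reconstruction. Which aggregations were used (a rolling mean plus differences to the rolling max/min) is an assumption, but the loop structure is consistent with the write-up: 4 columns x 4 windows x 3 derived features = 48, which together with the 6 base features gives the 54 inputs of model A.</p>
<pre><code>import pandas as pd

def add_rolling_features(df):
    df = df.copy()
    for col in ["AccV", "AccML", "AccAP", "AccMG"]:
        for w in [10, 50, 100, 1000]:
            roll = df[col].rolling(w, min_periods=1)
            df[f"{col}_mean_{w}"] = roll.mean()              # assumed aggregation
            df[f"{col}_sub_max_{w}"] = df[col] - roll.max()  # assumed aggregation
            df[f"{col}_sub_min_{w}"] = df[col] - roll.min()  # assumed aggregation
    return df
</code></pre>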
Benetech - Making Graphs Accessible
3rd place Solution - Matcha & Object Detection
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congrats to all the participants especially to those that managed to stick it to the end. This was overall a really interesting challenge, and we really enjoyed tackling it. Anyone in 0.86+ public scores could've taken the win so we're glad we dropped only 2 spots.</p> <p>Thanks to <a href="https://www.kaggle.com/crodoc" target="_blank">@crodoc</a> for the great teaming experience, we had two different opinions on how to tackle the problem : he went for and end-to-end approach whereas I wanted to do object detection &amp; OCR. This proved to be ideal for this competition as object detection compensated for end-to-end weaknesses.</p> <h2>Overview</h2> <p><a href="https://ibb.co/qFXF9vq"><img src="https://i.ibb.co/84T42Hv/cls.png" alt="cls"></a></p> <p>Our solution is a two-step pipeline, where the first step is a simple classification task and the second step solves the task for the different chart types. For scatter and dot plots, we used a detection approach. For line and bar charts, Matcha was really strong. Here are our LB scores :</p> <table> <thead> <tr> <th></th> <th>Overall</th> <th>Scatter</th> <th>Dots</th> <th>Line</th> <th>Horizontal Bar</th> <th>Vertical Bar</th> </tr> </thead> <tbody> <tr> <td><strong>Public</strong></td> <td>0.87</td> <td>0.09</td> <td>0</td> <td>0.33</td> <td>0.04</td> <td>0.39</td> </tr> <tr> <td><strong>Private</strong></td> <td>0.71</td> <td>0.28</td> <td>0.01</td> <td>0.13</td> <td>0.01</td> <td>0.27</td> </tr> </tbody> </table> <h2>Validation &amp; Insights</h2> <p>From the description page : </p> <blockquote> <p>The extracted figures in the training and public test sets are drawn from the same set of sources. The figures in the private test set are drawn from a distinct set of sources.</p> </blockquote> <p>Public LB is leaky, we need to create models that generalize to new data sources. Therefore, we use all the extracted data as validation, and train only on generated data. Data provided by the hosts is not diverse enough to allow for generalization so we generated our own. Matcha and classification models were fullfit on all the data after careful parameter optimization, but detection models were not trained on any extracted data.</p> <p>For validating on dots, we used a small curated dataset of 20 images found on google images. We estimated via probing that there are between 100 and 125 dots in the test set, all of them most likely being in the private test set. We did not probe for scatter though, this would've been helpful. </p> <h2>Step 1 - Classification</h2> <p>Nothing too fancy here, we trained models on (benetech + theo + crodoc) generated data.</p> <p><strong>Main parameters:</strong></p> <ul> <li>2 epochs on 88k images. 
</li> <li>lr <code>3e-4</code> or <code>5e-4</code> (2 lrs for blend, we also used 2 seeds).</li> <li>Mixup and some color augmentations mostly.</li> <li>256x384 image size.</li> <li>NfNet-l2 with 0.2 dropout.</li> </ul> <h2>Step 2.a - Scatter</h2> <p><a href="https://ibb.co/5v4cM9C"><img src="https://i.ibb.co/ct3TgvR/scatter-pipe.png" alt="scatter-pipe"></a></p> <p>For scatter, we relied on <a href="https://github.com/Megvii-BaseDetection/YOLOX" target="_blank">YoloX</a> to detect all markers. Cached is used to take care of the other useful elements. If all the points are detected correctly, it’s not that complicated to infer the target : detect the ticks &amp; labels, read the labels, and interpolate !</p> <p><strong>More details:</strong></p> <ul> <li>Ensemble YoloX-m and YoloX-l with NMS, helps reduce the number of FNs.</li> <li>Models are trained on scatter + dot generated benetech data + a bunch of plots I generated and pseudo labeled, for 10 epochs.</li> <li>CV 0.67, public LB 0.09~, private 0.29 - performance drop almost exclusively comes from overlapping/too hard to detect markers.</li> <li>Bunch of post-processing to make the pipeline more robust to OCR mistakes and detection mistakes.</li> <li>We were initially using Yolo-v7, but had to switch to YoloX because of the first rule change. It took us a week to match Yolo-v7 performance with YoloX.</li> </ul> <h2>Step 2.b - Dots</h2> <p><a href="https://ibb.co/sgMLzxG"><img src="https://i.ibb.co/TwFzdjS/dot-pipe.png" alt="dot-pipe"></a></p> <p>Dot pipeline is similar to scatter, but a bit more simple. We detect the points and cluster them, and map them to the detected x-labels. Labels with no assigned cluster are given the target 0, others the number of detected points. Counting points is not really robust to detection mistakes, so instead we used the height of the uppermost point and interpolated.</p> <h2>Step 2.c - Bars &amp; Lines</h2> <p>Matcha was really powerful here. We used <code>matcha-base</code> and set <code>is_vqa=False</code> to avoid giving texts as input to the model.</p> <p>We trained Matcha to predict chart type, xs and ys for an image. The ground truth looked the same as <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> used for his donut approach (except we dropped the prompt token). We tried other approaches, but this worked best :</p> <pre><code> = X_START + .join(list(map(str, xs))) + X_END = Y_START + .join(list(map(str, ys))) + Y_END = + chart_type + + x_str + y_str </code></pre> <p>e.g: <code>&lt;line&gt;&lt;x_start&gt;0;2;4;6&lt;x_end&gt;&lt;y_start&gt;2.7;2.2;3.6;5.2;&lt;y_end&gt;</code></p> <p>The most valuable boost we got by generating additional charts using matplotlib. We reused the values and texts from the train dataset to generate ticks &amp; values, with different styles/patterns/fonts/colors for diversity. The code for generating additional images has around 1000 lines and basically covers most cases where the model was failing when validated on the "extracted" dataset (e.g. negative values, line edges, missing bars, multiline text, text rotations).</p> <p><strong>Other things that helped :</strong></p> <ul> <li>Fix the number of decimal points for numerical values using their range :<br> <code>number_of_decimals = max(0, round(np.log10(1/ (max(y_ticks) - min(y_ticks))) + 3)</code></li> <li>We use an additional chart type : histograms, to learn that such charts have one less y value. 
</li> <li>Add an additional cross entropy loss for chart types.</li> <li>Ensemble several (4) models (+0.01 public) :<ul> <li>Voting to compute the number of outputs and fix obvious mistakes.</li> <li>Voting for categorical predictions.</li> <li>Averaging for continuous predictions.</li></ul></li> </ul> <p><strong>More details:</strong></p> <ul> <li>lr 3e-5, cosine with 0.25 cycle.</li> <li>10 Epochs.</li> <li>Save weights every 0.25 epochs and use a model soup of all checkpoints from epoch &gt; 1.</li> <li>Augs : Color transforms, image compression and random scaling.</li> </ul> <h2>Final Words</h2> <p>We were greatly perturbed by the unexpected second rule change, especially considering the fact that I was on vacation. Matcha and classification models were retrained on ICDAR during the last 3 days, this gave a 0.01 public LB boost but nothing on private. </p> <p>Thanks for reading !</p>
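<p>The serialization snippet above lost its string literals, so here is a hedged reconstruction of the ground-truth builder, inferred from the example output quoted in the write-up; the token spellings and helper names are assumptions. The decimal-precision formula is repeated with its missing closing parenthesis added.</p>
<pre><code>import numpy as np

X_START, X_END = "&lt;x_start&gt;", "&lt;x_end&gt;"
Y_START, Y_END = "&lt;y_start&gt;", "&lt;y_end&gt;"

def build_target(chart_type, xs, ys):
    x_str = X_START + ";".join(map(str, xs)) + X_END
    y_str = Y_START + ";".join(map(str, ys)) + Y_END
    return "&lt;" + chart_type + "&gt;" + x_str + y_str

# build_target("line", [0, 2, 4, 6], [2.7, 2.2, 3.6, 5.2]) returns
# '&lt;line&gt;&lt;x_start&gt;0;2;4;6&lt;x_end&gt;&lt;y_start&gt;2.7;2.2;3.6;5.2&lt;y_end&gt;'

def n_decimals(y_ticks):
    # number of decimal places derived from the y-tick range
    return max(0, round(np.log10(1 / (max(y_ticks) - min(y_ticks))) + 3))
</code></pre>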
Image Matching Challenge 2023
(Prize Eligible) 7th place Solution using a novel matcher LightGlue
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Intro</h1> <p>Hi there, we are Alex, Andri, Felix, Deep and Philipp, 4 master students and one PhD from ETH Zurich.</p> <p>First of all, we would like to thank the organizers for hosting this fun and interesting challenge, we have greatly enjoyed it! We also greatly enjoy reading about the solution of other teams, who present captivating and innovative concepts through their fantastic works.</p> <p>We have been pushing for a prize-eligible version that could compete against solutions using SuperPoint (SP) and SuperGlue (SG). In order to achieve this, we tried out various replacements for SP such as DISK and ALIKED and moved from SuperGlue to <a href="https://github.com/cvg/LightGlue" target="_blank">LightGlue</a> (LG), a cheaper and more accurate local feature matcher developed at ETHZ which is released under the APACHE license. While LG provided very promising results, we were unable get a satisfactory score without using SP until the very last submission on the last day. This last submission, using an ensemble of ALIKED, DISK and SIFT, gave us enough confidence to choose it as our final submission. However, our best scoring submission would have been an ensemble using DISK, SIFT with SP which would have matched the score of 2nd place (0.562). Additionally, we had another submission that was able to match the score of 5th place on the private leaderboard but did not produce convincing train nor public scores.</p> <table> <thead> <tr> <th>Features</th> <th>Matchers</th> <th>Train</th> <th>Public</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>ALIKED*</td> <td>LG</td> <td>0.763</td> <td>0.361</td> <td>0.407</td> </tr> <tr> <td>ALIKED+SIFT*</td> <td>LG+NN</td> <td>0.594</td> <td>0.434</td> <td>0.480</td> </tr> <tr> <td>DISK*</td> <td>LG</td> <td>0.761</td> <td>0.386</td> <td>0.437</td> </tr> <tr> <td>DISK+SIFT*</td> <td>LG+NN</td> <td><strong>0.843</strong></td> <td>0.438</td> <td>0.479</td> </tr> <tr> <td>ALIKED2K+DISK*</td> <td>LG+LG</td> <td>0.837</td> <td>0.444</td> <td>0.488</td> </tr> <tr> <td>ALIKED2K+DISK+SIFT**</td> <td>LG+LG+NN</td> <td>0.837</td> <td><strong>0.475</strong></td> <td>0.523</td> </tr> <tr> <td>ALIKED2K+DISK+SIFT</td> <td>LG(h)+LG(h)+NN</td> <td>0.824</td> <td>0.450</td> <td><strong>0.529</strong></td> </tr> <tr> <td>------------------------</td> <td>------------</td> <td>---------</td> <td>---------</td> <td>---------</td> </tr> <tr> <td>DISK+SP</td> <td>LG+LG</td> <td>0.876</td> <td>0.484</td> <td><strong>0.562</strong></td> </tr> <tr> <td>DISK+SP*</td> <td>LG+SG</td> <td>0.880</td> <td>0.498</td> <td>0.517</td> </tr> <tr> <td>DISK+SIFT+SP*</td> <td>LG+NN+LG</td> <td><strong>0.890</strong></td> <td><strong>0.511</strong></td> <td>0.559</td> </tr> <tr> <td>DISK+SIFT+SP*</td> <td>LG+NN+SG</td> <td>0.867</td> <td>T/o</td> <td>T/o</td> </tr> </tbody> </table> <p>** was our final submission, * have been submitted after the deadline, LG(h) has an increased matching threshold of LightGlue of 0.2 (default is 0.1).</p> <h1>LightGlue vs SuperGlue</h1> <p>LightGlue is an advanced matching framework developed here at ETH Zurich, which exhibits remarkable efficiency and precision. 
Its architecture features self- and cross-attention mechanisms, empowering it to make robust match predictions. By employing early pruning and confidence classifications, LightGlue efficiently filters out unmatchable points and terminates computations early, thus avoiding unnecessary processing. LightGlue, along with its training code, is made available under a permissive APACHE license, facilitating broader usage.</p> <p>In comparison to SuperGlue when combined with SuperPoint, LightGlue demonstrates superior performance in both accuracy and speed. It notably enhances the scores on the train, public, and private datasets while accomplishing these results in nearly half the time required by alternative methods.</p> <table> <thead> <tr> <th>Config</th> <th>Train</th> <th>Public</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>SP+SG</td> <td>0.643</td> <td>0.361</td> <td>0.438</td> </tr> <tr> <td>SP+LG</td> <td>0.650</td> <td>0.384</td> <td>0.461</td> </tr> </tbody> </table> <h1>Method</h1> <p>We developed a modular pipeline that can be called with various arguments, enabling us to try out different configurations and combine methods very easily. In our pipeline, we made heavy use of hloc, which we used as a starting point.</p> <h2>Image Retrieval</h2> <p>To avoid matching all image pairs of a scene in an exhaustive manner, we used NetVLAD to retrieve the top k images to construct our image pairs. We also tried out CosPlace but did not observe any notable improvements over NetVLAD. Depending on the configuration of each run, we either used <em>k=20</em>, <em>30</em> or <em>50</em> due to run time constraints. For our final submission, we used <em>k=30</em>.</p> <h2>Feature Extraction</h2> <p>For keypoint extraction, we combined and tried multiple alternatives. For all feature extractions, we experimented with different image sizes but finally settled on resizing the larger edge to 1600 as it provided the most robust scores:</p> <ul> <li>ALIKED: We played around with a few settings and finally chose to add it to our ensemble as it showed promising results on a few train scenes. We had to limit the number of keypoints to 2048 due to run-time limitations.</li> <li>DISK: DISK was the most promising replacement for SP. We tried a few different configurations and finally settled with the default using a max of 5000 keypoints.</li> <li>SIFT: Due to its rotation invariance and fast matching, adding sift to our ensemble turned out to boost performance, especially for heritage/dioscuri and heritage/cyprus.</li> <li>SP: SuperPoint was the best-performing features extractor in all our experiments, however, we did not choose it for our final submission because of its restrictive license.</li> </ul> <h2>Feature Matching</h2> <p>We used NN-ratio to match SIFT features. For the deep features such as DISK, ALIKED and SP, we trained LightGlue on the MegaDepth dataset.</p> <h2>Ensembles</h2> <p>The ensembles gave us the biggest boost in the score. It allowed us to run extraction and matching for different configurations and combine the matches of all configurations. This basically gives us the benefits of all used methods. The only drawback is the increased run-time and we thus had to decrease the number of retrievals. 
Adding SIFT was always a good option because it did not increase the run-time by much while helping to deal with rotations.</p> <h2>Structure-from-Motion</h2> <p>For the reconstruction, we used PixSfM and forced COLMAP to use shared camera parameters for some scenes.</p> <h3>Pixel-Perfect-SfM</h3> <p>We added PixSfM (after compiling a wheel for manylinux, following the build pipeline of pycolmap) as an additional refinement step to the reconstruction process. During our experiments, we noted that using PixSfM decreased the score on scenes with rotated images as the S2DNet features are not rotation invariant. We thus only used it if no rotations are found in the scene. Due to the large number of keypoints in our ensemble, we had to use the low memory configuration in all scenes, even on the very small ones.</p> <h3>Shared Camera Parameters</h3> <p>We noticed that most scenes have been taken with the same camera and therefore decided to force COLMAP to use the same camera for all images in a scene if all images have the same shape. This turned out to be especially valuable on the haiper scenes where COLMAP assigned multiple cameras.</p> <h2>Localizing Unregistered Images</h2> <p>Some images were not registered, even with a high number of matches to registered ones, possibly because the assumption of shared intrinsics was not always valid. We, therefore, introduced a post-processing step where we used the hloc toolbox to estimate the pose of unregistered images. Specifically, we checked if the camera of an unregistered image is already in the reconstruction database. If that was not the case, we would infer it from the exif data.</p> <h1>Other things tried</h1> <ul> <li>rotating images → We used an image orientation prediction model to correct for rotations. This worked well on the training set but reduced our score significantly upon submission.</li> <li>Inspired by last year's solutions, use cropping to focus matching on important regions between image pairs → Became infeasible as we would have a different set of keypoints for each pair of images used for matching.</li> <li>Other feature extractors and matcher such as a reimplementation of SP and dense matchers such as LoFTR, DKM → did not improve results or too slow, also unclear license for SP reimplementation.</li> <li>Estimated relative in-plane rotation pairwise from sift matches and then estimated the rotation for each image by propagating the rotation through the maximum spanning tree of pairwise matches. → Worked sometimes on Dioscuri but failed on other scenes.</li> <li>Resize for sfm did not help.</li> </ul> <h1>Acknowledgments</h1> <p>We would like to thank Philipp Lindenberger for his awesome guidance, tips, and support. We also want to give a huge credit to his novel matcher LightGlue. 
We also want to thank the <a href="https://cvg.ethz.ch" target="_blank">Computer Vision and Geometry Group, ETH Zurich</a> for the awesome project that started all this.</p> <h1>Links</h1> <ul> <li><a href="https://github.com/cvg/LightGlue" target="_blank">LightGlue Repo</a></li> <li><a href="https://arxiv.org/pdf/2306.13643.pdf" target="_blank">LightGlue Paper</a></li> <li><a href="https://github.com/veichta/IMC-2023" target="_blank">Solution Repo</a></li> <li><a href="https://www.kaggle.com/code/alexanderveicht/imc2023-from-repo" target="_blank">Kaggle Notebook</a></li> </ul> <h2>Per Scene Train Scores</h2> <h3>Heritage</h3> <table> <thead> <tr> <th>Features</th> <th>Matchers</th> <th>Cyprus</th> <th>Dioscuri</th> <th>Wall</th> <th>Overall</th> </tr> </thead> <tbody> <tr> <td>ALIKED</td> <td>LG</td> <td>0.850</td> <td>0.684</td> <td><strong>0.967</strong></td> <td><strong>0.833</strong></td> </tr> <tr> <td>DISK</td> <td>LG</td> <td>0.314</td> <td>0.592</td> <td>0.843</td> <td>0.583</td> </tr> <tr> <td>ALIKED+SIFT</td> <td>LG+NN</td> <td>0.991</td> <td>0.772</td> <td>0.436</td> <td>0.733</td> </tr> <tr> <td>DISK+SIFT</td> <td>LG+NN</td> <td>0.993</td> <td>0.624</td> <td>0.756</td> <td>0.791</td> </tr> <tr> <td>ALIKED2K+DISK</td> <td>LG+LG</td> <td>0.792</td> <td>0.712</td> <td>0.930</td> <td>0.811</td> </tr> <tr> <td>ALIKED2K+DISK+SIFT**</td> <td>LG+LG+NN</td> <td><strong>0.993</strong></td> <td><strong>0.802</strong></td> <td>0.595</td> <td>0.796</td> </tr> <tr> <td>ALIKED2K+DISK+SIFT</td> <td>LG(h)+LG(h)+NN</td> <td><strong>0.993</strong></td> <td><strong>0.802</strong></td> <td>0.595</td> <td>0.796</td> </tr> </tbody> </table> <h3>Haiper</h3> <table> <thead> <tr> <th>Features</th> <th>Matchers</th> <th>bike</th> <th>chairs</th> <th>fountain</th> <th>Overall</th> </tr> </thead> <tbody> <tr> <td>ALIKED</td> <td>LG</td> <td>0.431</td> <td>0.735</td> <td><strong>0.998</strong></td> <td>0.721</td> </tr> <tr> <td>DISK</td> <td>LG</td> <td><strong>0.926</strong></td> <td>0.799</td> <td><strong>0.998</strong></td> <td>0.908</td> </tr> <tr> <td>ALIKED+SIFT</td> <td>LG+NN</td> <td>0.579</td> <td>0.931</td> <td><strong>0.998</strong></td> <td>0.836</td> </tr> <tr> <td>DISK+SIFT</td> <td>LG+NN</td> <td>0.917</td> <td>0.929</td> <td><strong>0.998</strong></td> <td>0.948</td> </tr> <tr> <td>ALIKED2K+DISK</td> <td>LG+LG</td> <td>0.918</td> <td>0.812</td> <td><strong>0.998</strong></td> <td>0.909</td> </tr> <tr> <td>ALIKED2K+DISK+SIFT**</td> <td>LG+LG+NN</td> <td>0.922</td> <td>0.801</td> <td><strong>0.998</strong></td> <td>0.907</td> </tr> <tr> <td>ALIKED2K+DISK+SIFT</td> <td>LG(h)+LG(h)+NN</td> <td>0.920</td> <td>0.934</td> <td><strong>0.998</strong></td> <td><strong>0.951</strong></td> </tr> </tbody> </table> <h3>Urban</h3> <table> <thead> <tr> <th>Features</th> <th>Matchers</th> <th>kyiv-puppet-theater</th> <th>Overall</th> </tr> </thead> <tbody> <tr> <td>ALIKED</td> <td>LG</td> <td>0.735</td> <td>0.735</td> </tr> <tr> <td>DISK</td> <td>LG</td> <td>0.793</td> <td>0.793</td> </tr> <tr> <td>ALIKED+SIFT</td> <td>LG+NN</td> <td>0.215</td> <td>0.215</td> </tr> <tr> <td>DISK+SIFT</td> <td>LG+NN</td> <td>0.789</td> <td>0.789</td> </tr> <tr> <td>ALIKED2K+DISK</td> <td>LG+LG</td> <td>0.742</td> <td>0.742</td> </tr> <tr> <td>ALIKED2K+DISK+SIFT**</td> <td>LG+LG+NN</td> <td>0.806</td> <td>0.806</td> </tr> <tr> <td>ALIKED2K+DISK+SIFT</td> <td>LG(h)+LG(h)+NN</td> <td><strong>0.824</strong></td> <td><strong>0.824</strong></td> </tr> </tbody> </table>
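<p>For readers who want to try the released matcher, the basic extract-and-match loop below follows the README of the linked LightGlue repository (image paths are placeholders; the DISK keypoint budget matches the 5000 quoted above). It illustrates the API, not the team's full pipeline.</p>
<pre><code>import torch
from lightglue import LightGlue, DISK
from lightglue.utils import load_image, rbd

device = "cuda" if torch.cuda.is_available() else "cpu"
extractor = DISK(max_num_keypoints=5000).eval().to(device)  # 5000 keypoints as in the write-up
matcher = LightGlue(features="disk").eval().to(device)

image0 = load_image("image0.jpg").to(device)                # placeholder paths
image1 = load_image("image1.jpg").to(device)

feats0 = extractor.extract(image0)
feats1 = extractor.extract(image1)
matches01 = matcher({"image0": feats0, "image1": feats1})
feats0, feats1, matches01 = [rbd(x) for x in (feats0, feats1, matches01)]  # drop batch dim

matches = matches01["matches"]                  # indices into each keypoint list
points0 = feats0["keypoints"][matches[..., 0]]
points1 = feats1["keypoints"][matches[..., 1]]
</code></pre>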
Vesuvius Challenge - Ink Detection
8th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Team <a href="https://www.kaggle.com/renman" target="_blank">@renman</a> and <a href="https://www.kaggle.com/yoyobar" target="_blank">@yoyobar</a> </p> <h2>Acknowledgement:</h2> <p>First of all we want to thank Kaggle for organizing such a great competition. The host has been very helpful and responsive and made this competition such a great experience for us!</p> <h2>1. Summary of solution:</h2> <ul> <li>Ensemble of 5 Unet models with 3D seresnet101, 3D Resnet34, and Segformer-b3 as backbones</li> <li>Post-processing methods such as masking out edge pixels for inference and applying rot90 and "channels" TTA</li> <li>Our final two submissions including one with a fixed threshold (0.55) and the other with a dynamic threshold based on pixel percentile (top 3% pixels)</li> </ul> <h2>2. Dataset:</h2> <h3>Preprocessing:</h3> <ul> <li>Set maximum pixel value to 0.78 (i.e. if pixel value &gt; 0.78, set it to 0.78)</li> </ul> <h3>CV Strategies:</h3> <ul> <li>We tried two different CV strategies:<ul> <li>3-fold CV: Split into fragment 1,2,3</li> <li>4-fold CV: split fragment 2 into top and bottom half (“2a” and “2b”)</li></ul></li> </ul> <h3>Data Augmentation:</h3> <ul> <li>Image Resizing and Sampling:<ul> <li>We augment the original dataset with three different resolutions categories (1, 0.75x, 0.5x)</li></ul></li> <li>Compression in z dimension (rate=+-0.2)</li> <li>3d Rotation (rate=+-5°)</li> <li>Channel Dropout</li> <li>Rotate and Flip</li> <li>Albumentation:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F461631%2F6f55b04646dc7cfdea4163f03aa4fa91%2FScreen%20Shot%202023-06-15%20at%208.43.22%20PM.png?generation=1686834309651934&amp;alt=media" alt=""></li> </ul> <h2>3. Models:</h2> <ul> <li><h3>3D SEResnet101:</h3> <ul> <li>Change to the architecture:<ul> <li>Add Squeeze and Excitation Layer and Dilated Convolution to the encoder</li> <li>Use BasicBlock instead of Bottleneck</li></ul></li> <li>No pretrained model is used. Model is trained from scratch with a 2 stage process:<ul> <li>The first stage is to train the model with image size of 96 which is easier to train</li> <li>The second stage is to continue training the model with bigger image size of 192 and 256</li></ul></li> <li>Fixed stride of 112</li> <li>Randomly pick 20 channels between 15 and 40 as input to the model:<ul> <li>At inference time, slide through channels 15 to 40 with a moving window of 20 and stride of 5 and take average (i.e. "channel" TTA)</li></ul></li> <li>BCE + Dice Loss</li> <li>Exclude areas outside binary mask<br> <br></li></ul></li> <li><h3>Segformer-b3:</h3> <ul> <li>Use Segformer-b3 with 3 channels input. 
Model pretrained on imagenet</li> <li>First method:<ul> <li>Select channels 25-36 (11 channels), split them into groups of [25-27, 28-30, 31-33, 34-36] and feed them into the model</li> <li>Merge the feature maps in the channel dimension with "Attention Pooling":<ul> <li>Refer to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>'s excellent elaborate in this thread: <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/407972" target="_blank">https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/407972</a></li></ul></li></ul></li> <li>Second Method:<ul> <li>Select channels 29-34 (6 channels). Split them into two groups of [29-31, 32-34]</li> <li>Feed each group into the same model and average the logit outputs</li></ul></li> <li>Image size 224</li> <li>BCE Loss</li> <li>Exclude areas outside binary mask<br> <br></li></ul></li> <li><h3>3D Resnet34:</h3> <ul> <li>Pick channel 22-34 (18 channels) as input to the model</li> <li>Image size 224 and 192</li> <li>BCE + Dice Loss</li> <li>Exclude areas outside binary mask</li></ul></li> </ul> <h2>4. Inference:</h2> <ul> <li>Fp16</li> <li>Ignore areas outside binary mask</li> <li>Rotation TTA (90 degree rotation)</li> <li>Channels TTA:<ul> <li>At inference time, slide through channels 15 to 40 with a moving window of 20 and stride of 5 and take average (3D SEResnet101 only)</li></ul></li> <li>Mask out edge pixels:<ul> <li>Only generate predictions in the “middle part” of the output</li></ul></li> <li>Thresholding:<ul> <li>One submission we use a fixed Threshold of 0.55</li> <li>The other submission we use a dynamic threshold strategy based on pixel percentile (top 3% pixels), i.e. sort the pixels and pick a threshold value such that top 3 percentile of pixels are selected</li></ul></li> </ul> <h2>5. Final Ensemble:</h2> <ul> <li>The final ensemble consists of a weighted average of the 5 models below:</li> </ul> <table> <thead> <tr> <th>Models</th> <th>Image size</th> <th>Kfold</th> <th>Output size</th> <th>Stride</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>3D Seresnet101</td> <td>256</td> <td>Single Fold validated on fragment "2a"</td> <td>192</td> <td>192 // 4</td> <td>0.76</td> <td>0.61</td> </tr> <tr> <td>Segformer-b3 with attention pooling</td> <td>224</td> <td>4 folds</td> <td>160</td> <td>160 // 2</td> <td>0.7</td> <td>0.6</td> </tr> <tr> <td>Segformer-b3 with averaged logit</td> <td>224</td> <td>3 folds</td> <td>176</td> <td>176 // 2</td> <td>0.66</td> <td>0.56</td> </tr> <tr> <td>3D Resnet34</td> <td>224</td> <td>4 folds</td> <td>192</td> <td>192 // 2</td> <td>0.66</td> <td>0.58</td> </tr> <tr> <td>3D Resnet34</td> <td>192</td> <td>3 folds</td> <td>160</td> <td>160 // 2</td> <td>-</td> <td>-</td> </tr> </tbody> </table> <ul> <li>Our final 2 submissions including one with a fixed threshold (public LB: 0.79, private LB: 0.65) and one with a pixel percentile threshold (public LB: 0.79, private LB: 0.63)</li> </ul> <h2>6. 
Methods tried but did not work:</h2> <ul> <li>Tried to add Convolutional Block Attention Module (CBAM) for the decoder of 3D Resnet but CV dropped<ul> <li>Paper to CBAM: <a href="https://arxiv.org/abs/1807.06521" target="_blank">https://arxiv.org/abs/1807.06521</a></li></ul></li> <li>Use IR image as additional segmentation head but it did not lead to any improvement in CV</li> <li>Mixup and label noise for data augmentation</li> </ul> <p>Update: The code and detailed instructions for replicating our models can be found in the following publicly available github repo:</p> <p><a href="https://github.com/flyyufelix/vesuvius_challenge_8th_place_solution" target="_blank">https://github.com/flyyufelix/vesuvius_challenge_8th_place_solution</a></p>
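<p>The dynamic thresholding described above is easy to reproduce: instead of a fixed cut-off, pick the threshold so that a fixed fraction of pixels ends up positive. A minimal sketch follows (the function name and the use of <code>np.quantile</code> are assumptions):</p>
<pre><code>import numpy as np

def percentile_mask(probs, top_frac=0.03):
    # Threshold chosen so that roughly the top `top_frac` of pixels are kept as ink
    thr = np.quantile(probs, 1.0 - top_frac)
    return probs > thr

# e.g. mask = percentile_mask(ensemble_probs, top_frac=0.03)
</code></pre>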
BirdCLEF 2023
24th place solution - pre-training & single model (5 folds ensemble with ONNX)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone!</p> <p>First, thanks to the Cornell Lab of Ornithology and the Kaggle team for hosting the competition. Also, congratulations to all winners and participants.</p> <h2>TL;DR</h2> <p>I joined about 2 weeks before the competition ended, and I had about 7 days to participate due to my busy daily life. In the meantime, It's a great opportunity for me to challenge myself to build a model in a short time. So, I had to focus on which recipes would work best within minimum trials.</p> <p>Thankfully, There're many good quality resources like top solutions from the last competitions. I started to follow up on the solution and figured out the working recipes in general. (past BirdCLEF 2020 experience helps a lot too)</p> <p>As a result of the consequence and some lucks, I can build a decent model I guess : )</p> <h2>Architecture</h2> <p>Here's the pipeline.</p> <ol> <li>pre-train on 2020, 2021, 2022, xeno-canto datasets.</li> <li>fine-tune on 2023 dataset (based on the pre-trained weight).<ul> <li>minor classes (&lt;= 5 samples) are included in all folds</li></ul></li> </ol> <p>I applied the same training recipes (e.g. augmentation, loss functions, …) each step.</p> <h3>CV</h3> <p>(although based on my few experiments) my cv score and LB/PB are kinda correlated.</p> <table> <thead> <tr> <th>Exp</th> <th>CV</th> <th>LB</th> <th>PB</th> <th>Note</th> </tr> </thead> <tbody> <tr> <td><code>effnetb0</code></td> <td>0.7720</td> <td>0.82438</td> <td>0.73641</td> <td>multiple losses, 5 folds</td> </tr> <tr> <td><code>effnetb0</code></td> <td>0.7693</td> <td>0.82402</td> <td>0.73604</td> <td>clipwise loss, 5 folds</td> </tr> <tr> <td><code>eca_nfnet_l0</code></td> <td>0.7753</td> <td>0.80731</td> <td>0.71845</td> <td>clipwise loss, single fold</td> </tr> </tbody> </table> <h3>Model</h3> <p>I used SED architecture with the <code>efficientnet_b0</code> backbone. Also, I tested <code>eca_nfnet_l0</code> backbone, and it has a better cv score, but I can't use it due to the latency.</p> <h3>Training recipe</h3> <ul> <li>[<strong>Important</strong>] pre-training</li> <li>[<strong>Important</strong>] augmentations<ul> <li>waveform-level<ul> <li>[Important] or mixup on a raw waveform</li> <li>gaussian &amp; uniform noise</li> <li>pitch shift</li> <li>[Important] background noise</li></ul></li> <li>spectrogram-level<ul> <li>spec augment</li></ul></li></ul></li> <li>log-mel spectrogram<ul> <li>n_fft &amp; window size 1024, hop size 320, min/max freq 20/14000, num_mels 256, top_db 80. 
(actually, I wanted n_fft with 2048, but I set it to 1024 by my mistake)</li></ul></li> <li>trained on 5 secs clips</li> <li>stratified k fold (5 folds, on primary_label)</li> <li>label smoothing 0.1</li> <li>multiple losses (from <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243351" target="_blank">birdcelf 2021 top 5</a>)<ul> <li>bce loss on clip-wise output w/ weight 1.0</li> <li>bce loss on max of segment-wise outputs w/ weight 0.5</li></ul></li> <li>fp32</li> <li>AdamW + cosine annealing (w/o warmup)<ul> <li>50 epochs (usually converged between 40 ~ 50)</li></ul></li> </ul> <h2>Inference</h2> <p>I can ensemble up to 4 models with Pytorch (it took nearly 2 hrs). To mix more models, I utilized ONNX and did graph optimization, and it makes one more model to be ensembled! Finally, I can ensemble 5 models (single model 5 folds). Also, to utilize the full CPU, I do some multi-processing stuff.</p> <h2>Not worked (perhaps I might be wrong)</h2> <ul> <li>secondary label (both hard label, soft label (e.g. 0.3, 0.5))</li> <li>focal loss</li> <li>longer clips (e.g. 15s)</li> <li>post-processings (proposed in the BirdCLEF 2021, and 2022 competitions)<ul> <li>aggregate the probs of the previous and next segments.</li> <li>if there's a bird above the threshold, multiply constants on all segments of the bird.)</li></ul></li> </ul> <p>I hope this could help!</p> <p>Thanks : )</p>
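<p>As a concrete reference for the front end, the listed spectrogram settings translate into torchaudio roughly as below (torchaudio is shown for illustration; the write-up does not state which audio library was actually used):</p>
<pre><code>import torch
import torchaudio

SR = 32000  # competition audio is 32 kHz
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=SR, n_fft=1024, win_length=1024, hop_length=320,
    f_min=20, f_max=14000, n_mels=256, power=2.0,
)
to_db = torchaudio.transforms.AmplitudeToDB(stype="power", top_db=80)

wave = torch.randn(1, 5 * SR)   # placeholder 5-second clip
logmel = to_db(mel(wave))       # shape (1, 256, 501) log-mel spectrogram
</code></pre>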
Google - American Sign Language Fingerspelling Recognition
19th Place Solution - Jasper models
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My solution is quite simple. I wasn't going to write about it, but there are no Jasper-like solutions in discussions yet :)</p> <h1>Preprocessing</h1> <p>Preprocessing was almost the same with <a href="https://www.kaggle.com/competitions/asl-signs/discussion/406306" target="_blank">previous competition</a>. But deleted mixup and replace augmentation. And add more complex time augmentations.</p> <h1>Loss</h1> <p>I only used CTC loss.</p> <h1>Data split</h1> <p>Random split <strong>not</strong> by user for 20 folds grouping by data length.<br> Deleted obviously bad samples:</p> <pre><code>supplemental_metadata_df = supplemental_metadata_df[~((supplemental_metadata_df[] &lt; ) &amp; (supplemental_metadata_df[] &gt; ))] train_df = train_df[~((train_df[] &lt; ) &amp; (train_df[] &gt; ))] </code></pre> <p><br> Used all supplemental data in train.</p> <p>Also tried to create second loader with pseudo labels on bad predicted samples. Or delete bad predicted samples. Used both loaders while training:</p> <pre><code>loader_to_use = epoch % == : loader_to_use = </code></pre> <p>But it didn't help much.</p> <h1>Model</h1> <p>First I tried simple quartznet, but with all strides set to 1.<br> Then I added some features step by step and trained each experiment from previous best checkpoint.</p> <h3>Same for all runs:</h3> <ul> <li>Optimizer: LookAheadAdamW</li> <li>Scheduler: Onecycle</li> <li>Batch size: 80</li> </ul> <h3>List of experiments:</h3> <ul> <li>Simple 5x5 quartznet with stride = 1. [lr = 8e-3] [epochs=200] CV:0.8035</li> <li>Simple 5x5 quartznet with stride = 1. [lr = 7e-3] [epochs=150] CV: 0.8106</li> <li>Previous + SE blocks. [lr = 6e-3] [epochs=171] CV: 0.8160</li> <li>Previous + Deep supervision (DSV) outputs for train loss. [lr = 5.8e-3] [epochs=175] CV: 0.8211</li> <li>Previous without DSV + higher aug probabilities. Note: score worse but it needed to train one checkpoint without DSV for better next train results. [lr = 6e-3] [epochs=180] CV: 0.8177</li> <li>Previous + DSV + increased dropout. [lr = 6.1e-3] [epochs=200] CV: 0.8237</li> <li>Previous - DSV + lower dropout. [lr = 6e-3] [epochs=173] CV: 0.8188</li> <li>Previous + Hyper Column (HYP) + 2 more blocks with 5 repeats. [lr=5.8e-3] [epochs=180] CV: 0.8252</li> <li>Previous + masked 1conv + masked SE + 2D DepthWise CNN layer before classifier + more dropout. [lr=5.8e-3] [epochs=200] CV: 0.8294</li> <li>Previous + lstm after 2D Conv. [lr=1.75e-3] [epochs=125] CV: 0.8301</li> </ul> <h1>What didn't work for me:</h1> <ul> <li>Conformer models. I tried it but I guess without enough effort.</li> <li>Adding attention or conformer blocks to Jasper models.</li> </ul> <p>When I used raw conformer and it has very low score. But when I changed input 2D conv to 1D conv or when I just deleted it - it was much better, but still little worse than my Jasper models.</p> <h1>What I should have done, but didn't:</h1> <ul> <li>Train more epochs</li> <li>Try better with conformer or squeezeformer</li> <li>Add post processing</li> <li>Try seq2seq</li> <li>Fixed length of input</li> </ul> <p>Thanks to Kaggle for this great competition. 
Thanks to all top participants who shared their awesome solutions. <br> Congratulations to all winners! </p>
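<p>Since the training above relies entirely on CTC, here is a minimal, self-contained PyTorch CTC setup for readers unfamiliar with it (the blank index, vocabulary size, and tensor sizes are illustrative assumptions, not the author's values):</p>
<pre><code>import torch
import torch.nn as nn

T, N, C, S = 80, 4, 60, 12                        # time steps, batch, classes (59 chars + blank), target length
log_probs = torch.randn(T, N, C).log_softmax(-1)  # model output: (time, batch, classes)
targets = torch.randint(1, C, (N, S))             # label indices, index 0 reserved for blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
</code></pre>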
Google Research - Identify Contrails to Reduce Global Warming
6th place solution (CV: 0.7069 / Public LB: 0.706 / Private LB: 0.713)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we would like to thank the organizers of the competition and the Kaggle team for the competition hosting and all the participants who have shared their knowledge so generously. </p> <h1>Summary</h1> <ul> <li>2 stage pipeline: classification and segmentation</li> <li>ensemble of 2.5D model using 5 or 7 frame and 11ch input 2D model</li> <li>soft label using individual label</li> <li>percentile threshold</li> </ul> <h1>Pipeline</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1745801%2Fdeb33f0f5a0349334ed541ec39c2e353%2F2023-08-10%2016.57.28.png?generation=1691664174212060&amp;alt=media" alt=""></p> <p>This pipeline has been used frequently in past semantic segmentation competitions. <br> The advantage of this pipeline is that it creates two models, one trained on pos_only and the other on all data, thereby increasing the ensemble effect. <br> It also saves inference time by allowing more models to be assigned to images for which screening has determined that a mask is present. </p> <h1>Models Detail</h1> <h2>Common Settings</h2> <p>Both 2.5D and 2D models were trained for both hard and soft labels. <br> The soft label is created as follows</p> <pre><code>label = self.np_load( os.path.join() ).astype(np.float32) label = np.clip((label * ).(-) / label.shape[-], , ) </code></pre> <p>All models were trained by all the data in train folder and validated by the one in validation folder. </p> <h2>2.5D Model</h2> <p>We used a modified <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/417255" target="_blank">2.5D model used in Vesuvius Challenge</a>.<br> The main changes are as follows</p> <ul> <li>Apply 3D CNN to feature maps of all resolutions</li> <li>3D For the output of the 3D CNN, feature maps corresponding to the target frame are extracted and given to UNet </li> <li>In the binary classification model, the 3DCNN is applied to the smallest resolution feature map and input to the FC layer in the same way. </li> </ul> <p>Backbones used were resnetrs101, resnest101e, swin_base_patch4_window12, swinv2_base_window16, convnext_base, convnext_large, etc. <br> The input frame is set to 5 or 7 frames, with the target frame in the middle. <br> Input resolution of 384 or 512 are used for the model. <br> It was trained using <code>BCE+Dice*0.25</code> for the pos only model and <code>BCE</code> for the all data model. <br> Also, different augmentation is used for each of the classification model, all data seg model, and pos only seg model. (possibly not optimized and therefore not appropriate). 
</p> <pre><code> albu.Flip(p=), albu.RandomRotate90(p=), albu.ShiftScaleRotate(p=, scale_limit=, shift_limit=), albu.GridDistortion(num_steps=, distort_limit=, p=), albu.CoarseDropout(max_height=, max_width=, fill_value=, mask_fill_value=, p=,) albu.ShiftScaleRotate(p=, scale_limit=, shift_limit=), albu.Flip(p=), albu.RandomRotate90(p=), albu.ShiftScaleRotate(p=, scale_limit=, shift_limit=, rotate_limit=), albu.Rotate(limit=, p=), </code></pre> <ul> <li>EMA(decay=0.998)</li> </ul> <h2>2D Model</h2> <p>For 2D models, we simply used Unet implemented in Segmentation Models Pytorch. The backbones used were res2net50d, regnetz_d8, regnetz_d32, and regnetz_e8.</p> <p>Characteristically, the 2D models were trained on 11 channels, i.e. band15 - band14, band14 - band11 and all 9 bands. Learning with this input does not perform well in short epochs (e.g. 20~30 epochs), but in long epochs (specifically 100 epochs), it obtained better validation scores than with false color input.</p> <p>The 11ch models also had one curious point. They had poor public score but good private score. I believe this is what pushed us to the prize zone.</p> <p>Other training settings are as follows(shared by all data seg model and pos only seg one):</p> <ul> <li>Input<ul> <li>size: 384x384</li> <li>each channels are standardized by subtracting the global mean and dividing by the global variance of the channel</li></ul></li> <li>Loss: <code>0.9*BCE + 0.1*Dice</code></li> <li>Data Augmentation</li> </ul> <pre><code>albu.HorizontalFlip(p=), albu.VerticalFlip(p=), albu.ShiftScaleRotate(p=, rotate_limit=) albu.RandomResizedCrop(p: , scale=[, ], height=, width=) </code></pre> <ul> <li>EMA(decay=0.9999)</li> </ul> <h1>Postprocessing</h1> <p>From the time taken to inference, the percentage of empty masks in the entire test was found to be about the same as in validation set. (about 30%). <br> Therefore, we used validation set to optimize the percentile threshold for both the classification part and segmentation part. <br> As a result, the percentile threshold is better than the optimized fixed threshold. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1745801%2F48a3c5b9e9ed108faaed1514dac63485%2F2023-08-10%2017.20.42.png?generation=1691664304846451&amp;alt=media" alt=""></p> <h1>Not Works</h1> <ul> <li>pseudo label・mean teacher</li> <li>2D pretrain -&gt; 2.5D finetuning<ul> <li>unstable</li></ul></li> <li>more augmentations</li> <li>mixup・cutmix・label smoothing</li> <li>TTA</li> <li>efficientnet</li> </ul> <h1>Training Code and Inference Notebook</h1> <p>We have released the training code and the inference notebook for the best submission.</p> <ul> <li>training code (2.5D model part): <a href="https://github.com/tattaka/google-research-identify-contrails-reduce-global-warming" target="_blank">https://github.com/tattaka/google-research-identify-contrails-reduce-global-warming</a></li> <li>training code (2D model part): <a href="https://github.com/tawatawara/kaggle-google-research-identify-contrails-reduce-global-warming" target="_blank">https://github.com/tawatawara/kaggle-google-research-identify-contrails-reduce-global-warming</a></li> <li>inference code: <a href="https://www.kaggle.com/code/tattaka/contrail-submission-ensemble?scriptVersionId=139455030" target="_blank">https://www.kaggle.com/code/tattaka/contrail-submission-ensemble?scriptVersionId=139455030</a></li> </ul>
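<p>The soft-label snippet above lost its numeric arguments, so here is a hedged guess at its intent, based on the description of soft labels built from <code>human_individual_masks.npy</code>: average the individual annotations over the labeler axis and clip to [0, 1]. The scaling factor that was dropped from the snippet is assumed to be 1 here.</p>
<pre><code>import numpy as np

def make_soft_label(individual_masks):
    # individual_masks: array of shape (H, W, 1, n_labelers) with one binary mask per labeler
    label = individual_masks.astype(np.float32)
    return np.clip(label.sum(-1) / label.shape[-1], 0.0, 1.0)
</code></pre>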
Google - American Sign Language Fingerspelling Recognition
126th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/code/markwijkhuizen/aslfr-inference-python3-7-tpu" target="_blank">This solution</a> is an improved version of <a href="https://www.kaggle.com/code/markwijkhuizen/aslfr-transformer-training-inference" target="_blank">my public notebook</a> with the following modifications:</p> <ul> <li>Running on a TPU to allow for faster training and experimentation</li> <li>Increased number of input frames 128 → 288</li> <li>Batch size 64 → 512</li> <li>Units encoder 384 → 288</li> <li>Units decoder 256 → 128</li> </ul> <p>For an in depth explanation of the approach I refer to the public notebook code and comment section.</p> <p>The expect the performance boost in my private notebook is mostly due to the decreased model capacity and increased batch size, this should reduce overfitting.</p>
Google - American Sign Language Fingerspelling Recognition
Custom MovinetClassifier Concept
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F396181%2F4663191986e89b81a51041daba2605e8%2Fanimation_23842.gif?generation=1692951869820889&amp;alt=media" alt="hand_animation_23842"></p> <p>Above illustrates transforming the hand landmarks (for row 23842 phrase 2 in train) in tensorflow preprocessing layers into a "clip" to be used for inference with a custom classifier fine tuned for the 59 prediction characters.<br> MoViNet models for video classification have small streaming models that can be trained and inferred on CPU and converted to TF Lite. </p> <p><a href="https://github.com/tensorflow/models/tree/master/official/projects/movinet" target="_blank">MoViNet model from TensorFlow Models</a><br> custom classifier MovinetClassifier fine tune with pretrained weights e.g., movinet_a0_stream (trained on Kinetics 600) <br> input shape 1 x 1 x 172 x 172<br> checkpoint ~15 MB trained model ~30 MB TF Lite ~10 MB</p> <p>Convert to TF SavedModel. Then the SavedModel can be converted to TF Lite using the TFLiteConverter<br> <code>converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)</code><br> <code>tflite_model = converter.convert()</code></p> <p><a href="https://arxiv.org/abs/2103.11511" target="_blank">MoViNets: Mobile Video Networks for Efficient Video Recognition</a><br> In the paper, multiple-class labels per video were used for Charades. <br> Or per frame or selected frame inference was a possibility then post process with tensorflow unique_with_counts <br> and considerations for double numbers or letters.</p> <p>This was not finished for the competition, but adding here for interest and awareness of these models. <br> And a bit of fun.<br> It could be a next gen Sign Language uses video content with mediapipe markers and this would fit nicely.</p>
Google Research - Identify Contrails to Reduce Global Warming
5th place solution (best single model, private LB 0.71443)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for the great competition. Also thanks to my teammates: <a href="https://www.kaggle.com/imakarov" target="_blank">Ilya Makarov</a>. I had a lot of fun and learned a lot.<br> I'd like to share our solution and what I learned from this competition. I hope it will be helpful for you.</p> <h2>Solution overview of my part</h2> <h4>Models</h4> <p>I use 5 models as follows:</p> <ul> <li>Single frame model 1: EfficientNetV2L + UNet, 512x512, crop to 480x480 for training.</li> <li>Single frame model 2: EfficientNetV2L + UNet, 768x768, crop to 512x512 for training.</li> <li>Multi frames model 1: EfficientNetV2L + customized 3D UNet 1, 256x256, use all 8 frames.</li> <li>Multi frames model 2: EfficientNetV2L + customized 3D UNet 2, 512x512, crop to 480x480 for training, use 5 frames only.</li> <li>Multi frames model 3: EfficientNetV2L + customized 3D UNet 3, 512x512, crop to 480x480 for training, use 5 frames only.<br> <strong>(<a href="https://www.kaggle.com/wuliaokaola/icrgw-submission-single-model-lb-0-714-5th-place" target="_blank">Best single model, late submission, private LB: 0.71443, public LB 0.70803</a>)</strong></li> </ul> <p>About the customized 3D UNet<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2006644%2F2130a20eb0d042ac9d48d1ec2d7d88f9%2Funet3d_.png?generation=1691656552129914&amp;alt=media" alt=""></p> <ul> <li>Apply the same encoder(backbone) to each frame</li> <li>Combine the encoder outputs by Conv3D at each level. ConvLSTM2D also works, but it's slower.</li> <li>Use UNet as decoder to get the final output.</li> </ul> <h4>Augmentation</h4> <ul> <li>Filp LR, Filp UD, Rotate 90 (for TTA8)</li> <li>Noise by channel, noise by pixel</li> <li>Random dropout frame (multi frames model only)</li> <li>Random crop</li> </ul> <p>At the early stage of the competition, I noticed that if I don't use flip and rot90, the model will be overfitting quickly. But if I use the flip and rot90, the result became worse. I was confused for a long time. Finally, I found the reason. As mentioned in <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/discussion/430479#2382723" target="_blank">other discussions</a>, The image and the mask are not aligned. The mask is a littel bit shifted to the bottom right. So if we use flip or rot90, we need to align them. The magic number for me is: x = 0.408, y = 0.453. </p> <p>Tried but not worked:</p> <ul> <li>Random rotate</li> <li>CoarseDropout</li> <li>Dropout by channel</li> </ul> <h4>Target and loss function</h4> <p>I used 2 targets:</p> <ul> <li>Target 1: Sigmoid to 1 channel. weight: 75% (human_pixel_masks.npy)</li> <li>Target 2: Softmax to 5 channel. weight: 25% (human_individual_masks.npy. 0 for not contrails, 1 for 25% of labelers, 2 for 50% of labelers, 3 for 75% of labelers, 4 for 100% of labelers) </li> </ul> <p>The loss function is weighted binary cross entropy + dice loss. <br> And use Lion for optimizer. 
(Lion: <a href="https://github.com/keras-team/keras/blob/v2.13.1/keras/optimizers/lion.py" target="_blank">https://github.com/keras-team/keras/blob/v2.13.1/keras/optimizers/lion.py</a>)</p> <h4>Prediction</h4> <ul> <li>Ensemble of 5 models with TTA8 (flip LR, flip UD, rotate 90)<br> The TTA8 is very important. It can improve the score by 0.005~0.01.</li> </ul> <p>The best result is public LB: 0.71243, private LB: 0.71756. But unfortunately, we didn't choose it as our final submission. </p> <h4>Other experiments</h4> <ul> <li>I tried using all bands, which gave an improvement of 0.000x. But it made the submission too complex and took too much time, so I didn't use it in the final submission.</li> <li>UNet + Transformer (256, 128: UNet; 64, 32: Transformer). Using the transformer to get information from the other frames. It works, but not better than the UNet. Maybe because the transformer is not pretrained.</li> </ul> <h4>Update</h4> <ul> <li>20230811 Added a late submission score of the best single model.</li> </ul>
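<p>To make the multi-frame fusion idea above more concrete, here is a minimal PyTorch sketch of the "shared 2D encoder per frame + Conv3D fusion at each level + UNet-style decoder" pattern. This is an illustration, not the author's code: the tiny convolutional encoder stands in for EfficientNetV2L, and all layer sizes and names are arbitrary assumptions.</p> <pre><code>import torch
import torch.nn as nn

class ToyFrameEncoder(nn.Module):
    """Stand-in for the real 2D backbone (the author used EfficientNetV2L)."""
    def __init__(self, chs=(16, 32, 64)):
        super().__init__()
        self.stages = nn.ModuleList()
        in_ch = 3
        for ch in chs:
            self.stages.append(nn.Sequential(
                nn.Conv2d(in_ch, ch, 3, stride=2, padding=1),
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True)))
            in_ch = ch

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)          # multi-scale features reused as decoder skips
        return feats

class Conv3dFusion(nn.Module):
    """Collapse the T per-frame feature maps into one 2D map with a Conv3D over time."""
    def __init__(self, ch, t):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv3d(ch, ch, kernel_size=(t, 3, 3), padding=(0, 1, 1)),
            nn.BatchNorm3d(ch), nn.ReLU(inplace=True))

    def forward(self, x):                 # x: (B, C, T, H, W)
        return self.fuse(x).squeeze(2)    # time axis reduced to 1, then dropped

class MultiFrameUNetSketch(nn.Module):
    def __init__(self, t=5, chs=(16, 32, 64)):
        super().__init__()
        self.encoder = ToyFrameEncoder(chs)
        self.fusions = nn.ModuleList([Conv3dFusion(c, t) for c in chs])
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec2 = nn.Conv2d(chs[2] + chs[1], chs[1], 3, padding=1)
        self.dec1 = nn.Conv2d(chs[1] + chs[0], chs[0], 3, padding=1)
        self.head = nn.Conv2d(chs[0], 1, 1)

    def forward(self, frames):            # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1))   # same encoder applied to every frame
        fused = []
        for f, fusion in zip(feats, self.fusions):
            c, h, w = f.shape[1:]
            f = f.view(b, t, c, h, w).permute(0, 2, 1, 3, 4)  # (B, C, T, H, W)
            fused.append(fusion(f))
        x = torch.relu(self.dec2(torch.cat([self.up(fused[2]), fused[1]], dim=1)))
        x = torch.relu(self.dec1(torch.cat([self.up(x), fused[0]], dim=1)))
        return self.up(self.head(x))                 # per-pixel contrail logits

logits = MultiFrameUNetSketch(t=5)(torch.randn(2, 5, 3, 256, 256))  # shape (2, 1, 256, 256)
</code></pre>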
Google Research - Identify Contrails to Reduce Global Warming
105th Place Solution for the Google Research - Identify Contrails to Reduce Global Warming Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>1. Context section</h1> <ul> <li><a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/overview" target="_blank">Business context</a></li> <li><a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/data" target="_blank">Data context</a></li> </ul> <h1>2. Overview of the Approach</h1> <p>In this competition, the motivation is to contribute to the improvement of contrail prediction models, which are used to predict the formation of contrails generated by aircraft engine exhaust. Contrails are line-shaped clouds of ice crystals. They form when aircraft fly through super-humid regions in the atmosphere and contribute greatly to global warming by trapping heat in the atmosphere. Our goal is to validate models that use satellite imagery as input data and then provide airlines with more accurate ways to mitigate climate change by avoiding the formation of contrails.</p> <p>In this study, we addressed the contrail detection challenge using geostationary satellite imagery from the GOES-16 ABI. Our approach involved data preprocessing for standardization and ash RGB image creation, contrail annotation with IoU-based object detection, and model training with EfficientNet and Dice coefficient loss in a K-Fold cross-validation setup. A distinctive feature was threshold selection for image masking, with a 0.52 threshold derived from the fifth image's Dice coefficient distribution. Validation included an error metric assessing labeling accuracy. Importantly, the Dice coefficient distributions from training on the fifth image alone and additional-image training closely aligned, confirming the chosen threshold's effectiveness.</p> <h1>3. Details of the submission</h1> <h2>3.1 Data Exploration</h2> <p>The data used in this competition are geostationary satellite images, sourced from the GOES-16 Advanced Baseline Imager (ABI). The images are provided as a sequence of 10-minute interval snapshots, where each sequence consists of eight images, with labeling applied only to the fifth image. Each dataset entry is represented by a unique record_id and contains precisely one labeled frame. The training dataset offers both individual label annotations and aggregated ground truth annotations, while the validation data only contains the latter.</p> <h2>3.2 Data Preprocessing</h2> <h3>3.2.1 Image Standardization</h3> <p>Data preprocessing plays a crucial role in this competition. The provided data consists of eight different spectral bands, and the pixel values are not in the standard 0-255 range. 
To make the satellite data suitable for our models, we first normalize and transform them using the following two steps:</p> <ul> <li>Map the pixel values of each spectral band to the standard range of 0 to 255 to create visualizable images;</li> <li>Create an ash RGB image by combining these eight bands, forming a three-channel color image.</li> </ul> <h3>3.2.2 Contrail Annotation</h3> <p>Next, we use the Intersection over Union (IoU) method to locate and annotate the contrails in the images. Our steps are as follows:</p> <ul> <li>Utilize object detection models to detect contrails within the ash RGB images. Outline them using bounding boxes;</li> <li>For potentially overlapping boxes, calculate the IoU to determine the degree of overlap between the boxes;</li> <li>Based on the calculation, merge smaller boxes with significant overlap into larger bounding boxes to enhance annotation accuracy and readability.</li> </ul> <h2>3.3 Model Training</h2> <h3>3.3.1 Random Crop and Image Selection</h3> <p>Random cropping is implemented as a form of data augmentation. We select only the fifth image from each sequence for training, as it contains the binary mask information (0 to 1). </p> <p>We applied a total of 25 different methods of cropping and the four labeled green are the most effective ones: <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2497266%2F782ddf910435c2bca436c491cfae2869%2Fcropping%20methods.png?generation=1692827919871994&amp;alt=media" alt="cropping methods"></p> <p>The following are the training results of each method: <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2497266%2Ff6aba1d97ceee02ef0296aebd4ee3e82%2Fcropping%20results.png?generation=1692827970836162&amp;alt=media" alt="cropping results"></p> <p>Some of the blank data is due to the fact that during training, we initially didn’t head in the right direction, so the experiment didn’t continue on those methods. In the end, we opted for the seventh method: Randomized square cropping with at least one complete contrail and the size of the cropped image is always greater than 128 pixels. The cropped images could be proportionally scaled as needed. For instance, we could expand them to 512 pixels by random cropping images greater than 256 pixels and then resizing them. Likewise, if we wanted to increase the image size to 1024 pixels, we followed a similar process by random cropping images larger than or equal to 512 pixels and resizing them to 1024 pixels. </p> <h3>3.3.2 EfficientNet with K-Fold Validation and Dice Coefficient</h3> <p>The model is trained using the EfficientNet architecture. The K-Fold cross-validation is employed to assess the performance of the model across different subsets of the training dataset and to ensure the robustness and generalization of the model. For loss calculation, the Dice coefficient function is used, which is useful for segmentation tasks.</p> <h3>3.3.3 Threshold Selection for Masking</h3> <p>After training the model with all the fifth images from each sequence, the next step is to apply the trained model to the rest of the images in each sequence (the first to fourth and sixth to eighth) to create the masks. The most creative part of our solution is to determine an appropriate threshold (or confidence) within the range from 0 to 1 for masking. This threshold is based on the distribution of the fifth image of each sequence. 
The following are the steps in detail:</p> <ul> <li>Apply the trained EfficientNet models to the first to fourth and sixth to eighth images in each sequence;</li> <li>Generate per-pixel probabilities (ranging from 0 to 1) from the model’s predictions;</li> <li>Determine an optimal threshold within the 0-1 range by analyzing the distribution of the fifth image;</li> <li>Apply the selected threshold to the rest of the images (the first to fourth and sixth to eighth).</li> </ul> <p>This approach leverages the power of our EfficientNet models and ensures that the models can better identify and retain the relevant images. </p> <p>The training results are illustrated in the following two figures. The left figure displays the distribution of Dice coefficients obtained when training exclusively with the fifth image of each sequence. Conversely, the right figure showcases the distribution of Dice coefficients when training with additional images, specifically the first to fourth and sixth to eighth images within each sequence. These two distributions exhibit a remarkable similarity, enabling us to determine a suitable threshold based on the distribution derived from the fifth image.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2497266%2F38af041ab03c025ae9fb897aaee999c3%2FDice%20Coefficient%20Distribution%20.png?generation=1692828079636644&amp;alt=media" alt="dice coefficient distribution"><br> To assess the accuracy of our approach, we introduced an error metric. If the fifth image was not correctly labeled, it was counted as an error. Subsequently, we calculated the proportion of results that contained errors. Our statistical analysis revealed that a Dice coefficient exceeding 0.5275080198049544 (approximately 0.52 in our case) reliably indicated utility. Therefore, the threshold was set at 0.52, ensuring that the chosen threshold effectively distinguishes valuable information in the images. </p> <h1>4. Sources</h1> <ul> <li><a href="https://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html" target="_blank">EfficientNet: Improving Accuracy and Efficiency through AutoML and Model Scaling</a></li> <li><a href="https://eumetrain.org/sites/default/files/2020-05/RGB_recipes.pdf" target="_blank">Compilation of RGB Recipes</a></li> </ul>
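<p>As a companion to the preprocessing described in section 3.2.1, below is a minimal NumPy sketch of one common ash-RGB construction for this data, followed by scaling to 0-255. The band choices (11, 14, 15) and the normalization bounds follow the widely used public baseline for this competition rather than this write-up, so treat them as assumptions.</p> <pre><code>import numpy as np

# Bounds from the commonly used ash-RGB recipe (assumed, not taken from this write-up).
_T11_BOUNDS = (243, 303)
_CLOUD_TOP_TDIFF_BOUNDS = (-4, 5)
_TDIFF_BOUNDS = (-4, 2)

def _normalize(data, bounds):
    """Linearly map brightness temperatures to [0, 1] within the given bounds."""
    return np.clip((data - bounds[0]) / (bounds[1] - bounds[0]), 0, 1)

def ash_rgb_uint8(band11, band14, band15):
    """Build a 3-channel ash RGB image scaled to 0-255 from three GOES-16 bands."""
    r = _normalize(band15 - band14, _TDIFF_BOUNDS)
    g = _normalize(band14 - band11, _CLOUD_TOP_TDIFF_BOUNDS)
    b = _normalize(band14, _T11_BOUNDS)
    rgb = np.stack([r, g, b], axis=-1)   # shape (H, W, 3) or (H, W, T, 3)
    return (rgb * 255).astype(np.uint8)
</code></pre> <p>In the competition data each band_xx.npy array carries a time axis with the labeled frame in the middle of the sequence, so the labeled fifth image is obtained by selecting that index before (or after) building the RGB composite.</p>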
ICR - Identifying Age-Related Conditions
103rd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for this competition, and congratulations to the final winners. </p> <p>I actually stopped working on this competition a few months ago due to the unreliable cv-lb relationship. This decision, in hindsight, was certainly the reason I got on the lucky side of the shake-up. My solution was quite simple and used a weighted ensemble of three boosting models. Here is a <a href="https://www.kaggle.com/code/ryanbarretto/103rd-place-solution" target="_blank">link</a> to the code. </p>
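<p>The write-up is brief, so here is a minimal sketch of what "a weighted ensemble of three boosting models" typically looks like in code. The specific models, weights, and column handling are illustrative assumptions; the linked notebook contains the actual setup.</p> <pre><code>import numpy as np
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier

# Hypothetical weights; in practice they are tuned against cross-validation scores.
models = [
    (LGBMClassifier(), 0.4),
    (XGBClassifier(), 0.3),
    (CatBoostClassifier(verbose=0), 0.3),
]

def fit_predict_weighted(X_train, y_train, X_test):
    """Fit each boosting model and blend their class-1 probabilities by weight."""
    blended = np.zeros(len(X_test))
    for model, weight in models:
        model.fit(X_train, y_train)
        blended += weight * model.predict_proba(X_test)[:, 1]
    return blended
</code></pre>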
ICR - Identifying Age-Related Conditions
7th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>After doing a few submissions in the initial part of the competition, I didn't work on it further, as a heavy shakeup was expected.<br> But it's a positive shakeup for me now (with a solo gold) 😅.</p> <p>Here are my submission details:</p> <ul> <li>Fill NaN values with 0</li> <li>5-fold multi-label stratified split using the Greeks values</li> <li>EJ was categorical, so I used label encoding</li> <li>Also label encoding for Beta, Gamma and Delta</li> <li>Used a multiclass CatBoost classifier for all the models</li> </ul> <p>I saw that Beta, Gamma and Delta have very high predictive capability but are given only for the training data, so I used all the other features to predict the encoded Beta, Gamma and Delta with a 5-fold strategy, and used those predictions along with the other given features to predict the multiclass Alpha. Then I converted the Alpha probabilities to binary probabilities by adding the B, D and G probabilities to predict class 1 and using the A probability to predict class 0.</p> <p>That's it.</p> <p>Thank you.</p> <p>PS: <a href="https://www.kaggle.com/code/manthanbhagat/simple-baseline-add-greeks-features/notebook" target="_blank">here is my submission code</a></p>
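<p>Below is a minimal sketch of the idea described above: generate out-of-fold predictions for the Greek columns (Beta, Gamma, Delta), append them as features, train a multiclass model on Alpha, and collapse the multiclass probabilities to the binary target. It is an illustration under assumptions (CatBoost parameters, column handling, and the commented usage are mine, not the author's; EJ is assumed label-encoded and NaNs filled beforehand), so check <code>clf.classes_</code> before summing columns.</p> <pre><code>import numpy as np
import pandas as pd
from catboost import CatBoostClassifier
from sklearn.model_selection import StratifiedKFold

def oof_greek_probs(X, greek, n_splits=5, seed=0):
    """Out-of-fold multiclass probabilities for one Greek column (Beta/Gamma/Delta)."""
    y = greek.astype("category").cat.codes
    oof = np.zeros((len(X), y.nunique()))
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for tr_idx, va_idx in skf.split(X, y):
        model = CatBoostClassifier(loss_function="MultiClass", verbose=0)
        model.fit(X.iloc[tr_idx], y.iloc[tr_idx])
        oof[va_idx] = model.predict_proba(X.iloc[va_idx])
    return oof

def alpha_to_binary(alpha_probs, classes):
    """Sum the B/D/G probabilities into class 1; the A probability is class 0."""
    class1 = alpha_probs[:, [i for i, c in enumerate(classes) if c != "A"]].sum(axis=1)
    return 1 - class1, class1

# Assumed usage, with train/greeks loaded from the competition CSVs:
#   X = preprocessed training features (EJ label-encoded, NaNs filled with 0)
#   for g in ["Beta", "Gamma", "Delta"]:
#       probs = oof_greek_probs(X, greeks[g])
#       add probs as new columns of X (and the analogous predictions for the test set)
#   clf = CatBoostClassifier(loss_function="MultiClass", verbose=0).fit(X, greeks["Alpha"])
#   p0, p1 = alpha_to_binary(clf.predict_proba(X_test), clf.classes_)
</code></pre>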
ICR - Identifying Age-Related Conditions
Naive undersampler for the gold medal
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was a strange competition with a huge leaderboard shakeup. This is my solution, which would have resulted in a gold medal had I chosen <strong>a random seed of 42 (public score 0.22408, private score 0.3669)</strong> rather than <strong>a random seed of 19 (public score 0.20461, private score 0.41544)</strong>, which resulted in a bronze medal.</p> <pre><code>from pathlib import Path

import catboost as cb
import numpy as np
import pandas as pd
from imblearn.under_sampling import RandomUnderSampler
from sklearn.model_selection import StratifiedKFold, train_test_split

# NOTE: the string and numeric literals in this snippet were lost when the post was
# archived. The values below are plausible reconstructions, not necessarily the
# author's originals (in particular the test_size, n_splits and CatBoost params).
data_path = Path("/kaggle/input/icr-identify-age-related-conditions")
seed = 19  # the submitted seed; 42 would have landed in the gold zone


def balanced_log_loss(y_true, y_pred):
    # Competition metric: class-balanced binary log loss.
    y_pred = np.clip(y_pred, 1e-15, 1 - 1e-15)
    n0, n1 = np.bincount(y_true.astype(int))
    w0 = 1 / (n0 / len(y_true))
    w1 = 1 / (n1 / len(y_true))
    l0 = -w0 / n0 * np.sum(np.where(y_true == 0, 1, 0) * np.log(1 - y_pred))
    l1 = -w1 / n1 * np.sum(np.where(y_true != 0, 1, 0) * np.log(y_pred))
    return (l0 + l1) / (w0 + w1)


train_df = pd.read_csv(data_path / "train.csv")
test_df = pd.read_csv(data_path / "test.csv")
greeks_df = pd.read_csv(data_path / "greeks.csv")

feature_cols = train_df.columns.tolist()[1:-1]  # drop Id and Class

train_df, val_df = train_test_split(
    train_df, test_size=0.2, stratify=train_df["Class"], random_state=seed
)
greeks_df = greeks_df.loc[train_df.index]
train_df = train_df.reset_index(drop=True)
greeks_df = greeks_df.reset_index(drop=True)
train_df.drop(columns=["Id"], inplace=True)

# Naive undersampling of the majority class
under_sampler = RandomUnderSampler(random_state=seed)
train_df, _ = under_sampler.fit_resample(train_df, train_df["Class"])

oof = np.zeros(len(train_df))
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
test_probs = []
val_preds = []
fold = 0
for train_idx, val_idx in skf.split(train_df, train_df["Class"]):
    print(f" fold {fold} ".center(30, "="))
    fold += 1
    X_train, y_train = (
        train_df.loc[train_idx, feature_cols],
        train_df.loc[train_idx, "Class"],
    )
    X_val, y_val = (
        train_df.loc[val_idx, feature_cols],
        train_df.loc[val_idx, "Class"],
    )
    params = {
        "iterations": 1000,
        "learning_rate": 0.03,
        "eval_metric": "Logloss",
        "random_seed": seed,
    }
    model = cb.CatBoostClassifier(**params)
    model.fit(
        X_train, y_train, eval_set=[(X_val, y_val)], cat_features=["EJ"], verbose=0
    )
    oof[val_idx] = model.predict_proba(X_val)[:, 1]
    val_preds.append(model.predict_proba(val_df[feature_cols])[:, 1])
    test_preds = model.predict_proba(test_df.iloc[:, 1:])
    test_probs.append(test_preds)

print(balanced_log_loss(train_df["Class"].values, oof))
print(balanced_log_loss(val_df["Class"].values, np.mean(val_preds, axis=0)))

test_probs = np.mean(test_probs, axis=0)
sub_df = pd.DataFrame(
    {"Id": test_df.Id, "class_0": test_probs[:, 0], "class_1": test_probs[:, 1]}
)
sub_df.to_csv("submission.csv", index=False)
</code></pre>
ICR - Identifying Age-Related Conditions
30th Place Solution for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>On the competition's closing day, I was surprised to see a message from my friend and, upon checking the leaderboard, I found that I had come in 30th place.</p> <p>After submitting a few times early in the competition, I participated in HuBMAP, so I didn't expect to win a medal.</p> <h3><strong>Context</strong></h3> <ul> <li>Business context:&nbsp;<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a></li> <li>Data context:&nbsp;<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></li> </ul> <h2>Solution</h2> <p>I haven't done anything special.</p> <p>I used this excellent notebook: <a href="https://www.kaggle.com/code/datafan07/icr-simple-eda-baseline" target="_blank">https://www.kaggle.com/code/datafan07/icr-simple-eda-baseline</a> and I am grateful to <a href="https://www.kaggle.com/datafan07" target="_blank">https://www.kaggle.com/datafan07</a>.</p> <p>The private score of this notebook was 0.37816, which is equivalent to a silver medal-worthy score.</p> <p>I only modified the stratified k-fold section of this notebook to perform a split that also considered EJ.</p> <p>After reading this discussion (<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/411632" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/411632</a>), I realized that gender should also be considered during fold splitting.</p> <pre><code># The string literals in the original snippet were lost when the post was archived.
# The column choices below are a reconstruction of "stratify on the Greeks plus EJ";
# skf is the (multilabel) stratified k-fold object from the referenced notebook.
greeks = pd.merge(train[["Id", "EJ"]], greeks, on="Id")
for train_index, val_index in skf.split(train, greeks.iloc[:, 1:-1]):
    ...
</code></pre> <p>Due to this modification, the private score improved from 0.37816 to 0.37426.</p>
ICR - Identifying Age-Related Conditions
1699th Place Solution (partial) for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1><strong>Gold zone solution (private LB = 0.351) using Cosine Similarity</strong></h1> <p>First of all, I would like to thank InVitro Cell Research, LLC (ICR) and Kaggle for organizing and running this competition. I would also like to thank all the competitors who shared their views.<br> Congratulations to all the winners! Although I am not a winner, I would like to share one of the models used in the ensemble in my selected submission. In that model, the Cosine Similarity Method was used, and the scores for the model alone were as follows: local 0.275, public 0.16, and private 0.35. (In fact, the selected notebooks were an ensemble (bagging) of three models, but because the other models were more overfitted, the ensembled results were not as good. Therefore, in order not to waste the reader's time, I present only the model that worked well.)<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5800072%2Fc4f3a498f373170febd2dd6407994542%2F035.png?generation=1691893371996007&amp;alt=media" alt=""></p> <h2><strong>Context</strong></h2> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></li> </ul> <h2><strong>Overview of the Approach</strong></h2> <p><strong>Cosine Similarity Method</strong><br> The Cosine Similarity Method was developed to improve prediction performance on small training sets with large diversity [1]. In this method, for each test sample, the 80% of patient data most similar in feature space were used for training. The procedure is shown below.</p> <ul> <li>Feature 'EJ' was replaced as ('A': 0, 'B': 1).</li> <li>Missing values were treated with fillna(-1).</li> <li>Of the 56 features, 49 features were used. (Note that the original statement is incorrect. I forgot to remove 'DU', 'FC', and 'FL', and mistakenly removed 'CU' instead.)</li> <li>The cosine similarity between the 49-dimensional feature vectors was calculated using the formula below. However, 0.001 was added to each feature value to avoid errors with zero feature values.</li> </ul> <pre><code># The function name was lost in archiving; the formula is as in the original post.
def cos_sim(v1, v2):
    return np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
</code></pre> <ul> <li>For each test sample, the training data were sorted in order of similarity and the top 494 (80%) were used for training. (In the TabPFN calculation, SimpleImputer(median) was used.)</li> <li><a href="https://www.kaggle.com/datasets/carlmcbrideellis/tabpfn-019-whl" target="_blank">TabPFN (0.1.9) whl</a> was used for the model [2]. Epsilon was not used, nor was any post-processing such as p0 [p0 &gt; 0.89]. 
(The notebooks used for reference are <a href="https://www.kaggle.com/code/muelsamu/simple-tabpfn-approach-for-score-of-15-in-1-min" target="_blank">this</a> and <a href="https://www.kaggle.com/code/vadimkamaev/postprocessin-ensemble" target="_blank">this</a>.)</li> </ul> <h2><strong>Details of the submission</strong></h2> <p>When making predictions on the training data, the 494 most similar samples other than the target sample were used for training. The predictions are shown in the figure: train['class_1'] - train['Class'].<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5800072%2F9a0339001e1fee3a96029e5a53774c2e%2Fpredictions.png?generation=1691895047137531&amp;alt=media" alt=""><br> The <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/426536" target="_blank">proportion plot</a> is shown below.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5800072%2Fdd949e45980b86b39dca8e5219d19f45%2Fproportion.png?generation=1691895185645021&amp;alt=media" alt=""></p> <p>Because the data set was small, I felt that cross-validation did not work well and neglected local validation. In addition, the two figures above seemed to have room for improvement, so I implemented an ensemble that relied on the public LB score. These are the points I reflected on.<br> On the other hand, the cosine similarity method itself was later found to perform well. In this method, only data with high similarity to others are used for training, and high generalization performance can be expected by choosing an appropriate selection rate.<br> The example above shows an 80% selection rate, but comparable results were obtained even with a 75% selection rate.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5800072%2Fefcab20d6fb3d0acb9b05edf95d54c39%2Fcomparison.png?generation=1692819217934261&amp;alt=media" alt=""></p> <h2><strong>Sources</strong></h2> <p>[1] H. Sakiyama, R. Motoki, T. Okuno, J.-Q. Liu, Improvement of Blood-Brain Barrier Permeability Prediction Using Cosine Similarity, J. Comput. Chem., Jpn. in press.<br> [2] N. Hollmann, S. Müller, K. Eggensperger, F. Hutter, TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second, <a href="https://arxiv.org/abs/2207.01848" target="_blank">arXiv:2207.01848</a>.</p>
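<p>Here is a minimal sketch of the per-test-sample neighbour selection described above: for each test row, rank the training rows by cosine similarity and fit a fresh model on the most similar 80%. The helper names are illustrative assumptions, inputs are assumed to be NumPy arrays, and a plain logistic regression stands in for the classifier (the author actually used TabPFN 0.1.9 with a median SimpleImputer).</p> <pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression

def cosine_similarity_matrix(A, B, eps=0.001):
    """Pairwise cosine similarity between rows of A and rows of B (eps added as in the post)."""
    A = A + eps
    B = B + eps
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

def predict_with_similar_subset(X_train, y_train, X_test, rate=0.8):
    """For each test row, train on the top `rate` fraction of most similar training rows."""
    sims = cosine_similarity_matrix(X_test, X_train)
    k = int(rate * len(X_train))
    preds = np.zeros(len(X_test))
    for i, row_sims in enumerate(sims):
        idx = np.argsort(row_sims)[::-1][:k]        # most similar training rows first
        clf = LogisticRegression(max_iter=1000)     # stand-in for TabPFN
        clf.fit(X_train[idx], y_train[idx])
        preds[i] = clf.predict_proba(X_test[i : i + 1])[0, 1]
    return preds
</code></pre>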
ICR - Identifying Age-Related Conditions
19th Place Solution for the "ICR - Identifying Age-Related Conditions" Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hey Kagglers,<br> I guess mine is one of the simplest approaches for this competition.</p> <p><br></p> <p><strong>Solution:</strong> <br> <a href="https://github.com/SunilGolden/Kaggle-ICR" target="_blank">https://github.com/SunilGolden/Kaggle-ICR</a></p> <p><br></p> <p><strong>Approach</strong> (a short preprocessing/ensembling sketch follows below)</p> <ul> <li>Undersampled the training data</li> <li>Imputed null values with zero</li> <li>Encoded the categorical column using an ordinal encoder</li> <li>Scaled the other columns using a min-max scaler</li> <li>Used k-fold cross-validation with balanced log loss to evaluate TabPFN, XGBoost, CatBoost, HGBoost, LightGBM, Random Forest, AdaBoost, GBM, and SVM models, as well as a few versions of their ensembles</li> <li>Finally, I trained XGBoost, CatBoost, HGBoost, LightGBM, Random Forest, and GBM models and then ensembled them.</li> </ul> <p><br></p>
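<p>A minimal scikit-learn sketch of the preprocessing and soft-voting steps listed above. Column names and model choices are kept generic and are assumptions; the exact settings live in the linked repository.</p> <pre><code>import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, OrdinalEncoder

def make_preprocessor(numeric_cols, categorical_cols):
    """Zero-impute + min-max scale numeric columns, ordinal-encode the categorical ones."""
    numeric = Pipeline([
        ("impute", SimpleImputer(strategy="constant", fill_value=0)),
        ("scale", MinMaxScaler()),
    ])
    return ColumnTransformer([
        ("num", numeric, numeric_cols),
        ("cat", OrdinalEncoder(), categorical_cols),
    ])

def soft_vote(fitted_models, X):
    """Average the class-1 probabilities across already-fitted models."""
    return np.mean([m.predict_proba(X)[:, 1] for m in fitted_models], axis=0)
</code></pre>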
CAFA 5 Protein Function Prediction
A brief overview of 16th-place LB solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CAFA 5 Protein Function Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It was an incredible adventure and the first competition I ever took part in! I was very lucky to find an experienced teammate - <a href="https://www.kaggle.com/ogurtsov" target="_blank">@ogurtsov</a>, who was the leader of our team. This journey was not just about algorithms and data; it was a journey of exploration, curiosity, and learning, so thanks to the organizers, fellow participants, and the Kaggle community!</p> <p>Here is a brief overview of our 16th-place LB solution:</p> <p><strong>1) Data:</strong><br> We used the initial train set and also annotations from the UniProt and QuickGO databases (we filtered out terms with the NOT qualifier, evidence code "ND", obsolete terms, and those not present in the competition OBO file) in two ways: to update the train set (proteins with experimentally confirmed terms only) and to merge with predictions (test proteins with terms never experimentally confirmed).</p> <p><strong>2) Features</strong></p> <p>ESM2 3B protein sequence embeddings kindly shared by <a href="https://www.kaggle.com/andreylalaley" target="_blank">@andreylalaley</a>, with Z-score normalization (means and SDs of the train data applied to train, validation, and test).</p> <p><strong>3) Targets:</strong><br> We selected the top 500 GO terms based on their frequency, or terms with a frequency above a certain threshold (BP &gt; 250, MF and CC &gt; 50).</p> <p><strong>4) Neural Networks:</strong></p> <p>The modeling strategy involved multiple Multi-Layer Perceptrons (MLPs) with different hyperparameters and layers, written in R with Torch or TensorFlow. Each was trained on the initial or updated training set and different sets of targets. To ensure robustness, we employed cross-validation (CV) with 10 folds (one for validation and 9 for training) followed by averaging the predictions. For some models, we found it more effective to validate on train proteins only; in these cases, we used randomized 10-fold CV to avoid reducing the validation set. Importantly, we trained a separate MLP for each ontology, each time only on those proteins that have at least one term from the selected top.</p> <p>UPD: For all MLPs we used the AdamW optimizer, a <a href="https://torch.mlverse.org/docs/reference/lr_one_cycle.html" target="_blank">one-cycle learning rate</a> schedule, and either an early stopping callback (max 100 epochs, monitored metric: valid loss, patience 1, 3 or 5) or a <a href="https://rdrr.io/cran/keras/man/callback_reduce_lr_on_plateau.html" target="_blank">callback reducing the learning rate</a> by a factor of 0.5 (patience = 1).</p> <p><strong>5) Post-processing:</strong><br> First, we filtered the predictions using taxon constraints to eliminate terms incompatible with certain species. Also, after each step, we made corrections to ensure that the predictions of parent terms were not smaller than the predictions of child terms (in such cases we propagated the max prediction of the child terms). 
Finally, before the final merge with GO annotations, the predictions were filtered again to include only terms in the top 450 by frequency based on the original training set and also rare terms if they had predictions &gt; 0.5.</p> <p><strong>6) Ensemble:</strong><br> Predictions from 6 MLPs were simply averaged. The resulting predictions were averaged again with exported GO annotations (the set of test proteins with terms never experimentally confirmed).</p> <p>For the secondary solution, we did weighted averaging with GO annotations and selected the top 35 predictions for each ontology.</p> <p>I hope that this will be beneficial for others, serving as an example of what can be done (if we shake up) or what should be avoided (if we shake down at the end) 😆.</p>
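<p>The hierarchy correction mentioned in the post-processing step (a parent term's score must be at least the maximum of its children's scores) can be written as a simple pass over the GO graph. A minimal Python sketch, assuming a dict mapping each term to its parent terms (from the OBO file) and a dict of predicted scores; a proper topological order over the DAG would avoid the repeated passes used here for brevity:</p> <pre><code>def propagate_to_parents(scores, parents):
    """Push each child's score upward so that no parent scores below its children.

    scores:  dict {go_term: predicted probability}
    parents: dict {go_term: list of parent go_terms}
    """
    changed = True
    while changed:                      # repeat until the scores stop changing
        changed = False
        for term, score in list(scores.items()):
            for parent in parents.get(term, []):
                new_score = max(scores.get(parent, 0.0), score)
                if new_score != scores.get(parent, 0.0):
                    scores[parent] = new_score
                    changed = True
    return scores

preds = {"GO:child": 0.9, "GO:parent": 0.4}
parent_map = {"GO:child": ["GO:parent"]}
print(propagate_to_parents(preds, parent_map))  # GO:parent becomes 0.9
</code></pre>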
Google - American Sign Language Fingerspelling Recognition
2nd place solution - Test & Compare ASR algorithms
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Once again, thanks to Kaggle, Google and the other organizers for hosting this exciting competition. I felt this one, as well as the previous competition, was well organized so that we could try various ideas and learn a lot from each other. I hope these kinds of competitions open on Kaggle more often. :)</p> <h2>TLDR</h2> <p>My overall solution is almost identical to the previous competition. It's mainly just the adoption of ASR (Automatic Speech Recognition) algorithms (joint CTC + Attention) on top of the previous 1st place model. The details of the model, preprocessing, and training are largely similar to the previous competition, so please refer to the <a href="https://www.kaggle.com/competitions/asl-signs/discussion/406684" target="_blank">previous competition solution</a> for more details on model/training/augmentations/etc.</p> <h2>ASR Algorithms</h2> <p>I had a deep interest in ASR, and after recognizing its correlation with the competition, I studied and experimented with various algorithms. While I couldn't find anything particularly superior to the widely used vanilla CTC among the numerous ASR algorithms, I learned a lot through various experiments, and I will mainly share those insights.</p> <p>As I began studying ASR for this competition, I discovered from reading papers that current NN ASR algorithms mainly consist of CTC, attention-based, and transducer approaches. I implemented all three (though there are more diverse algorithms out there). In conclusion, from a baseline performance perspective, all three algorithms were quite similar and each has pros &amp; cons. Here are the insights I gathered from implementing each algorithm:</p> <ul> <li><p>CTC:</p> <ul> <li>During greedy decoding, the prediction step = O(1). Therefore, when using a single model, greedy CTC is the most efficient (this is probably why many solutions use a large single model with CTC).</li> <li>With beam search, prediction step = O(L) (where L = encoder output length). Depending on the efficiency of the algorithm, the computational overhead isn't very large, so the overhead introduced by beam search isn't very significant. However, the performance improvement from using beam search is minimal (+0.003), making it not very tempting (as its tflite-convertible implementation is not quite trivial) compared to Attention.</li> <li>As discussed, there's no guarantee of alignment between model prediction timesteps, so simple average ensembling can't be applied.</li></ul></li> <li><p>Attention-based:</p> <ul> <li>Uses an autoregressive approach, so even with greedy decoding, prediction step = O(N) (where N = decoder output length).</li> <li>If the decoder is an RNN or Transformer, stateful inference (i.e. caching previous keys and values with a Transformer) can be used to reduce the complexity. In the actual implementation with a Transformer, it reduced the inference time on the CPU by about 20~30%.</li> <li>Beam search is possible, but unlike CTC, it requires introducing appropriate heuristics to penalize the output length. Although several attempts were made, none worked. 
</li> <li>It's easier to apply ensembling with Attention. Simply take the average of each model at each prediction step, and it works well (ensembling three models gave +0.009).</li> <li>More room for score improvement than CTC (e.g. more decoder layers with augmentations).</li></ul></li> <li><p>Transducer:<br> Prediction steps = O(L) (where L = encoder output length). Because it requires the most prediction steps even in a greedy manner, it's not very efficient. I didn't consider it further as optimization was harder and performance was slightly lower compared to CTC or Attention. However, it has the advantage of real-time recognition in streaming mode if the encoder is causal (which wasn't relevant for this competition).</p></li> </ul> <p>Rough single-model inference time on the test dataset with a Kaggle kernel is as follows.</p> <blockquote> <p>CTC greedy (~40mins) &lt; AttentionGreedy = CTCBeamsearch (~1h20mins) &lt;= CTCAttentionJointGreedy (~1h30mins)</p> </blockquote> <h2>CTC-Attention Joint Training &amp; Decoding</h2> <p>As both CTC and Attention showed similar performance, I tried to find a method to utilize both techniques. I primarily referred to the following two papers:</p> <p><em>Joint CTC-Attention Based End-To-End Speech Recognition Using Multi-Task Learning, Kim et al. 2017.</em><br> <a href="https://arxiv.org/pdf/1609.06773.pdf" target="_blank">https://arxiv.org/pdf/1609.06773.pdf</a><br> <em>Joint CTC/attention decoding for end-to-end speech recognition, Hori et al. 2017.</em><br> <a href="https://aclanthology.org/P17-1048.pdf" target="_blank">https://aclanthology.org/P17-1048.pdf</a></p> <p>Joint CTC-Attention training is, as the name implies, adding both a CTC decoder (single GRU layer) and an attention decoder (single Transformer decoder layer) to one encoder for multitask learning. The loss weight is set to CTC=0.25 and attention=0.75. But the joint training itself did not bring a noticeable performance boost.</p> <p>By using the CTC prefix score, it is possible to calculate the probability of an arbitrary output hypothesis "h" without depending on the output timestep of the CTC output. In other words, <strong>implementing the CTC prefix score computation allows for ensembling outputs not only between CTC models but also between CTC and Attention models</strong>. CTC-attention joint decoding with CTC weight=0.3 showed a performance improvement of +0.007~8 without much impact on inference time. It was especially challenging for me to implement this accurately, efficiently, and in a tflite-compatible way.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5003978%2Fabde23b44b90f8ba6adc7d56ca874d86%2F.drawio-3.svg?generation=1692977029060458&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5003978%2F22c798e256f66322a0cc9e42b976d570%2F.drawio-4.svg?generation=1692977118893507&amp;alt=media" alt=""></p> <h2>Preprocessing</h2> <p>Similar to the previous competition solution, but simpler. Every landmark and its xyz coordinates were used and standardized. Left-handed signers were flipped (rather than treating this as an augmentation). MAX_LEN=768 was used. No hand-crafted features were significant, likely due to the absence of complex relations between movements in frames.</p> <h2>Model</h2> <p>The encoder is the same as in the previous competition (stacked Conv1DBlock + TransformerBlock) but increased in size (expand ratio 2-&gt;4 in Conv1DBlock) and depth (8 layers -&gt; 17 layers). A single model has ~6.5M parameters. 
I applied padding='same' (rather than 'causal') and output stride=2 (which requires slightly more logic for handling masking). In my case, mixing in Transformer blocks wasn't as effective as in the previous competition. Perhaps global features were less crucial in this comp. Additionally, I added one BN to the input of the Conv1DBlock for more training stability (especially with AWP + more epochs).</p> <p>The CTC decoder used a single GRU layer followed by one FC layer. The attention decoder used a single-layer Transformer decoder. Introducing augmentation to the decoder input and adding up to 4 decoder layers improves the performance of the attention decoder (up to +0.004). However, considering the number of parameters and inference speed, I considered this inefficient and thus used a single-layer decoder.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5003978%2F7d1f7aee69242af5e6aa5ef19c9460f6%2Fmodeldesign.drawio-2.svg?generation=1692993585923781&amp;alt=media" alt=""></p> <h2>Augmentations</h2> <ul> <li>Random resample (0.5x ~ 1.5x to original length)</li> <li>Random Affine</li> <li>Random Cutout</li> <li>Random token replacement on decoder input (prob=0.2)</li> </ul> <p>What I overlooked this time is that augmentations which showed no performance improvement, or even degraded performance, in shorter epochs might actually help improve performance in longer epochs. I experienced a similar phenomenon in the previous competition, but it seemed more pronounced in this one. By selecting augmentations solely based on the results from a 60-epoch experiment, I think I missed out on many potentially beneficial augmentations when testing the 400-epoch training in the final week.</p> <h2>Training</h2> <p>Epoch = 400<br> bs = 16 * num_replicas = 128<br> Lr = 5e-4 * num_replicas = 4e-3<br> AWP = 0.2 starts at 0.1 * Epoch<br> Schedule = CosineDecay with warmup ratio 0.1<br> Optimizer = AdamW (slightly better than RAdam with Lookahead)<br> Loss = CTC (weight=0.25) + CCE with label smoothing=0.1~0.25 (weight=0.75)</p> <p>Training takes around 14 hours with a Colab TPUv2-8 (as the Colab TPU runtime was recently reduced to 3~4 hours, I needed 4 consecutive sessions to complete training).<br> Longer training always gave better CV (5-fold split by id) and LB, but I had no time to try more than 400 epochs.</p> <h2>LB history</h2> <table> <thead> <tr> <th></th> <th>public LB</th> <th>private LB</th> </tr> </thead> <tbody> <tr> <td>prevcompsinglemodel + CTC (or Attention)</td> <td>0.76</td> <td>0.74</td> </tr> <tr> <td>+ deeper and wider model, add pose</td> <td>0.79</td> <td>0.78</td> </tr> <tr> <td>+ 3 seed ensemble with Attention</td> <td>0.80</td> <td>0.79</td> </tr> <tr> <td>+ ctc attention joint decoding</td> <td>0.81</td> <td>0.80</td> </tr> <tr> <td>+ use all landmarks, longer epoch</td> <td>0.82</td> <td>0.81</td> </tr> </tbody> </table> <p>Seeing solutions from other kagglers, not only the top solutions but also the public notebooks, is always inspiring. Thanks to the kagglers who shared their insights and ideas, as always. And big congrats to <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a> and <a href="https://www.kaggle.com/darraghdog" target="_blank">@darraghdog</a> for winning this highly competitive competition!</p> <p>Training/Inference code: <a href="https://www.kaggle.com/competitions/asl-fingerspelling/discussion/436873" target="_blank">https://www.kaggle.com/competitions/asl-fingerspelling/discussion/436873</a></p>
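<p>To illustrate the joint training objective described above (a CTC branch weighted 0.25 plus an attention/cross-entropy branch weighted 0.75, with label smoothing), here is a minimal PyTorch sketch of the loss combination. The original solution was written in TensorFlow for TPU, so this is an illustration only; the blank/pad ids, tensor shapes, and the 0.1 smoothing value are assumptions consistent with the ranges stated in the write-up.</p> <pre><code>import torch
import torch.nn as nn

class JointCTCAttentionLoss(nn.Module):
    """loss = w_ctc * CTC(encoder head outputs) + w_att * CE(attention decoder outputs)."""
    def __init__(self, blank_id=0, pad_id=-100, w_ctc=0.25, w_att=0.75, smoothing=0.1):
        super().__init__()
        self.ctc = nn.CTCLoss(blank=blank_id, zero_infinity=True)
        self.ce = nn.CrossEntropyLoss(ignore_index=pad_id, label_smoothing=smoothing)
        self.w_ctc, self.w_att = w_ctc, w_att

    def forward(self, ctc_log_probs, input_lengths, att_logits, targets, target_lengths):
        # ctc_log_probs: (T, B, V) log-softmax outputs of the CTC head
        # att_logits:    (B, U, V) logits of the autoregressive attention decoder
        # targets:       (B, U) token ids, padded with pad_id for the CE branch
        # Concatenate the unpadded targets for the CTC branch (padding is sliced away).
        ctc_targets = torch.cat([t[:l] for t, l in zip(targets, target_lengths)])
        loss_ctc = self.ctc(ctc_log_probs, ctc_targets, input_lengths, target_lengths)
        loss_att = self.ce(att_logits.transpose(1, 2), targets)
        return self.w_ctc * loss_ctc + self.w_att * loss_att
</code></pre>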
Google - American Sign Language Fingerspelling Recognition
Some Notes
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many more skilled participants have shared very rich and impressive solutions. I'm only documenting a few experimental insights to avoid adding something redundant to the discussion.</p> <h1>LB History</h1> <table> <thead> <tr> <th></th> <th>public LB</th> <th>private LB</th> </tr> </thead> <tbody> <tr> <td>max_len=188</td> <td>0.770</td> <td>0.741</td> </tr> <tr> <td>max_len=320</td> <td>0.776</td> <td>0.744</td> </tr> <tr> <td>Post Process by Chris Deotte</td> <td>0.778</td> <td>0.747</td> </tr> <tr> <td>expand ratio 2-&gt;4</td> <td>0.779</td> <td>0.752</td> </tr> <tr> <td>epochs 126-&gt;200</td> <td>0.780</td> <td>0.752</td> </tr> </tbody> </table> <h1>What Didn't Work</h1> <ol> <li><p>I attempted various normalization techniques, including global normalization, local normalization, and normalization centered around the nose. However, I'm unsure if there were any experimental errors, as I found that these approaches merely sped up training convergence without improving leaderboard performance. I'm uncertain if my conclusion is accurate.</p></li> <li><p>Lowering the dropout of the final layer from 0.4 to 0.1 and removing dropout from all other layers resulted in a decrease of 0.001 points on both the public and private test sets.</p></li> <li><p>Placing the batch normalization (BN) before the depthwise convolution (DW Conv) instead of after it resulted in a decrease of 0.001 points on both the public and private LB.</p></li> <li><p>Spatial mask. It may lead to training instability when used. If other aspects are handled well (such as normalization), it might be usable.</p></li> </ol>