Title of Competition | Title of Writeup | User | Writeup |
---|---|---|---|
Freesound Audio Tagging 2019 | 2nd place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>😀 </p>
<p>hah</p>
<p>Thanks for the competition and congratulations to all participants! Here is a summary of our solution.
code: <a href="https://github.com/qrfaction/1st-Freesound-Audio-Tagging-2019">https://github.com/qrfaction/1st-Freesound-Audio-Tagging-2019</a></p>
<h3>Solution</h3>
<p>single model CV: 0.89763
ensemble CV: 0.9108</p>
feature engineering
<ul>
<li>log mel (441,64) (time,mels)</li>
<li>global feature (128,12) (Split the clip evenly, and create 12 features for each frame. local cv +0.005)</li>
<li>length</li>
</ul>
<p><code>
import numpy as np

def get_global_feat(x, num_steps):
    stride = len(x) / num_steps
    ts = []
    for s in range(num_steps):
        i = s * stride
        wl = max(0, int(i - stride / 2))
        wr = int(i + 1.5 * stride)
        local_x = x[wl:wr]
        percent_feat = np.percentile(local_x, [0, 1, 25, 30, 50, 60, 75, 99, 100]).tolist()
        range_feat = local_x.max() - local_x.min()
        ts.append([np.mean(local_x), np.std(local_x), range_feat] + percent_feat)
    ts = np.array(ts)
    assert ts.shape == (128, 12), (len(x), ts.shape)
    return ts
</code></p>
preprocess
<ul>
<li>audio clips are first trimmed of leading and trailing silence</li>
<li>randomly select a 5 s clip from each audio clip (see the sketch below)</li>
</ul>
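<p>A minimal sketch of this preprocessing step, assuming librosa is used for loading and trimming (the actual repository may differ):</p>
<p>```
import numpy as np
import librosa

def load_random_clip(path, sr=44100, clip_seconds=5):
    y, _ = librosa.load(path, sr=sr)
    y, _ = librosa.effects.trim(y)            # drop leading/trailing silence
    target_len = sr * clip_seconds
    if len(y) >= target_len:                  # random 5 s crop
        start = np.random.randint(0, len(y) - target_len + 1)
        y = y[start:start + target_len]
    else:                                     # pad short clips at a random position
        pad = target_len - len(y)
        left = np.random.randint(0, pad + 1)
        y = np.pad(y, (left, pad - left))
    return y
```</p>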
model
<p>For details, please refer to code/models.py
* Melspectrogram layer (code from kapre; we use it to search the log-mel hyperparameters end to end)
* Our main model is a 9-layer CNN.
In this competition, we consider that the two axes of the log mel feature have different physical meanings,
so the usual max pooling and average pooling in the model are replaced
by max pooling along one axis and average pooling along the other.
(Our local CV gained a lot from this, but we forgot the exact number.)
* global pooling: pixelshuffle + max pooling on the time axis + average pooling on the mel axis.
* se block (several of our models use an se block)
* highway + 1*1 conv (used in several of our models)
* label smoothing</p>
<p>```
# log mel layer
x_mel = Melspectrogram(n_dft=1024, n_hop=cfg.stride, input_shape=(1, K.int_shape(x_in)[1]),
                       # n_hop -> stride n_dft kernel_size
                       padding='same', sr=44100, n_mels=64,
                       power_melgram=2, return_decibel_melgram=True,
                       trainable_fb=False, trainable_kernel=False,
                       image_data_format='channels_last', trainable=False)(x)

# pooling mode
x = AveragePooling2D(pool_size=(pool_size1, 1), padding='same', strides=(stride, 1))(x)
x = MaxPool2D(pool_size=(1, pool_size2), padding='same', strides=(1, stride))(x)

# model head
def pixelShuffle(x):
    _, h, w, c = K.int_shape(x)
    bs = K.shape(x)[0]
    assert w % 2 == 0
    x = K.reshape(x, (bs, h, w//2, c*2))
    # assert h % 2 == 0
    # x = K.permute_dimensions(x, (0, 2, 1, 3))
    # x = K.reshape(x, (bs, w//2, h//2, c*4))
    # x = K.permute_dimensions(x, (0, 2, 1, 3))
    return x

x = Lambda(pixelShuffle)(x)
x = Lambda(lambda x: K.max(x, axis=1))(x)
x = Lambda(lambda x: K.mean(x, axis=1))(x)
```</p>
data augmentation
<ul>
<li>mixup (local cv +0.002, lb +0.008; see the sketch below)</li>
<li>random select 5s clip + random padding</li>
<li>3TTA</li>
</ul>
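<p>For the mixup item above, a minimal sketch of the usual formulation for multi-label batches (the alpha value here is an assumption, not necessarily the authors' setting):</p>
<p>```
import numpy as np

def mixup_batch(x, y, alpha=1.0):
    lam = np.random.beta(alpha, alpha)        # mixing coefficient
    idx = np.random.permutation(len(x))       # shuffled partner for each sample
    x_mix = lam * x + (1.0 - lam) * x[idx]
    y_mix = lam * y + (1.0 - lam) * y[idx]    # soft multi-label targets
    return x_mix, y_mix
```</p>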
pretrain
<ul>
<li>train a model only on train_noisy as pretrained model</li>
</ul>
ensemble
<p>For details, please refer to code/ensemble.py
* We use a NN for stacking, which uses LocallyConnected1D to learn the ensemble weights for each class,
then a fully connected layer to learn label correlations,
with some initialization and weight-constraint tricks.</p>
<p>```
def stacker(cfg, n):
    def kinit(shape, name=None):
        value = np.zeros(shape)
        value[:, -1] = 1
        return K.variable(value, name=name)

    x_in = Input((80, n))
    x = x_in
    # x = Lambda(lambda x: 1.5*x)(x)
    x = LocallyConnected1D(1, 1, kernel_initializer=kinit, kernel_constraint=normNorm(1), use_bias=False)(x)
    x = Flatten()(x)
    x = Dense(80, use_bias=False, kernel_initializer=Identity(1))(x)
    x = Lambda(lambda x: (x - 1.6))(x)
    x = Activation('tanh')(x)
    x = Lambda(lambda x: (x + 1) * 0.5)(x)
    model = Model(inputs=x_in, outputs=x)
    model.compile(
        loss='binary_crossentropy',
        optimizer=Nadam(lr=cfg.lr),
    )
    return model
```</p>
<p>Yes, very happy. tks</p>
<p>notks</p>
<p>tks!</p>
<p>I only considered the physical meaning, ave and max pooling are the results of my experiment.</p>
<p>I have sent you a WeChat account via email</p> |
Freesound Audio Tagging 2019 | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I started using 1-d CNN from the very beginning and got 0.65 LB, but then I switched to 2-d CNN and immediately got around 0.7 LB. Then I spent a couple of weeks improving 2-d model (mostly by searching for a better loss function, ways to utilize the noisy subset, and playing with audio augmentations) and finally reached around 0.72 LB with a single model. Surprisingly, when I retrained my old 1-d CNN model with the new augmentations and training regime, the gap between 1-d version and 2-d version greatly shrank, giving me about 0.705 for a 1-d model. By blending these two I got around 0.73 LB.</p>
<p>Hello everyone! </p>
<p>At first, congratulations to all participants and to all who got any medals or any new ranks!</p>
<p>Below I want to outline my solution. My approach is quite similar to many approaches that were previously shared on the forum, so I will focus only on those tricks and methods that I find important or that weren't mentioned in others' solutions.</p>
<p><strong>Models</strong></p>
<p>I used two types of models, both are based on convolutions. The first type uses 2d convolutions and works on top of mel-scale spectrograms, while the second uses 1d-convolutions on top of raw STFT representations with relatively small window size like 256, so it's only 5 ms per frame or so. Both types of models are relatively shallow and consist of 10-12 convolutional layers (or 5-6 resnet blocks) with a small number of filters. I use a form of deep supervision by applying global max pooling after each block (typically starting from the first or second block) and then concatenating maxpool outputs from each layer to form the final feature vector which then goes to a 2-layer fully-connected classifier. I also tried using RNN's instead of a max pooling for some models. It made results a bit worse, but RNN seemed to make different mistakes, so it turned out to be a good member of the final ensemble.</p>
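<p>A rough sketch of the "global max pool after every block, then concatenate" head described above, assuming the convolutional blocks and their channel counts are given (this is not the author's exact code):</p>
<p>```
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConcatMaxPoolNet(nn.Module):
    def __init__(self, blocks, block_channels, n_classes=80):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)              # e.g. a list of resnet blocks
        self.classifier = nn.Sequential(
            nn.Linear(sum(block_channels), 512), nn.ReLU(),
            nn.Linear(512, n_classes))

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            # global max pooling after each block, loosely "deep supervision"-like features
            feats.append(F.adaptive_max_pool2d(x, 1).flatten(1))
        return self.classifier(torch.cat(feats, dim=1))
```</p>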
<p><strong>Frequency encoding</strong></p>
<p>2d convolutions are position-invariant, so the output of a convolution would be the same regardless of where the feature is located. Spectrograms are not images, Y-axis corresponds to signal frequency, so it would be nice to assist a model by providing this sort of information. For this purpose, I used a linear frequency map going from -1 to 1 and concatenated it to input spectrogram as a second channel. It's hard to estimate now without retraining all the models how much gain I got from this little modification, but I can say It was no less than 0.005 in terms of local CV score.</p>
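<p>A small sketch of this frequency-encoding trick (shapes and framework are assumptions; the idea is simply to concatenate a fixed -1..1 map along the frequency axis as a second channel):</p>
<p>```
import torch

def add_frequency_channel(spec):
    # spec: (batch, 1, n_mels, time) log-mel spectrogram
    b, _, n_mels, t = spec.shape
    freq = torch.linspace(-1.0, 1.0, n_mels, device=spec.device)
    freq = freq.view(1, 1, n_mels, 1).expand(b, 1, n_mels, t)
    return torch.cat([spec, freq], dim=1)     # (batch, 2, n_mels, time)
```</p>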
<p><strong>This is not really a classification task</strong></p>
<p>Most teams treated the problem as a multilabel classification task and used a form of binary loss such as binary cross entropy or focal loss. This approach is definitely valid, but in my experiments it appeared to be a little suboptimal. The reason is that the metric (lwlrap) is not a pure classification metric. Contrary to accuracy or f-score, it is based on <em>ranks</em>. So it wasn't really a surprise that when I used a loss function based on ranks rather than on binary outputs, I got a huge improvement. Namely, I used something called LSEP (<a href="https://arxiv.org/abs/1704.03135">https://arxiv.org/abs/1704.03135</a>), which is just a soft version of a pairwise rank loss. It makes your model score positive classes higher than negative ones, while a binary loss increases positive scores and decreases negative scores independently. When I switched to LSEP from BCE, I immediately got an improvement of approximately 0.015, and, as a nice bonus, my models started to converge much faster.</p>
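<p>For reference, a naive sketch of the LSEP idea, log(1 + sum of exp(s_neg - s_pos)) over all (positive, negative) class pairs of each sample; it is not numerically hardened and may differ in detail from the author's implementation:</p>
<p>```
import torch

def lsep_loss(scores, labels):
    # scores: (B, C) raw outputs; labels: (B, C) binary multi-label targets
    pos = labels.bool()
    diff = scores.unsqueeze(1) - scores.unsqueeze(2)       # diff[b, i, j] = s_j - s_i
    pair_mask = pos.unsqueeze(2) & (~pos).unsqueeze(1)     # i positive, j negative
    pairwise = torch.exp(diff) * pair_mask.float()         # exp(s_neg - s_pos) per pair
    return torch.log1p(pairwise.sum(dim=(1, 2))).mean()
```</p>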
<p><strong>Data augmentation</strong></p>
<p>I used two augmentation strategies. The first one is a modified MixUp. In contrast to the original approach, I used an OR rule for mixing labels. I did so because a mix of two sounds still allows you to hear both. I tried the original approach with weighted targets at some point and my results got worse.</p>
<p>The second strategy is augmentations based on audio effects such as reverb, pitch, tempo and overdrive. I chose the parameters of these augmentations by carefully listening to augmented samples.</p>
<p>I have found augmentations to be very important for getting good results. I guess the total improvement I got from these two strategies is about 0.05 or so. I also tried several other approaches, such as splitting the audio into several chunks and then shuffling them, replacing some parts of the original signals with silence, and some others, but they didn't make my models better.</p>
<p><strong>Training</strong></p>
<p>I used quite large audio segments for training. For most of my models, I used segments from 8 to 12 seconds. I didn't use TTA for inference and used full-length audio instead.</p>
<p><strong>Noisy data</strong></p>
<p>I tried several unsupervised approaches such as <a href="https://arxiv.org/abs/1807.03748">Contrastive Predicting Coding</a>, but never managed to get good results from it.</p>
<p>I ended up applying a form of iterative pseudolabeling. I predicted new labels for the noisy subset using a model trained on curated data only, chose best 1k in terms of the agreement between the predicted labels and actual labels and added these samples to the curated subset with the original labels. I repeated the procedure using top 2k labels this time. I applied this approach several times until I reached 5k best noisy samples. At that point, predictions generated by a model started to diverge significantly from the actual noisy labels. I decided to discard the labels of the remaining noisy samples and simply used model prediction as actual labels. In total, I trained approximately 20 models using different subsets of the noisy train set with different pseudolabeling strategies.</p>
<p><strong>Inference</strong></p>
<p>I got a great speed-up by computing both STFT spectrograms and mel spectrograms on a GPU. I also grouped samples with similar lengths together to avoid excessive padding. These two methods combined with relatively small models allowed me to predict the first stage test set in only 1 minute by any of my models (5 folds). </p>
<p><strong>Final ensemble</strong></p>
<p>For the final solution, I used 11 models trained with slightly different architectures (1d/2d cnn, rnn/no-rnn), slightly different subsets of the noisy set (see "noisy data" section) and slightly different hyperparameters. </p>
<p>Source code is available: <a href="https://github.com/ex4sperans/freesound-classification/tree/master">https://github.com/ex4sperans/freesound-classification/tree/master</a></p>
<p>I'll add a detailed README soon.</p>
<p>Hey! Well, in fact, I didn't compute predictions on each layer, but rather first collected features from each layer and then used all these features to make a prediction. So it's not exactly deep supervision in its original form, rather just something loosely inspired by it.</p> |
Freesound Audio Tagging 2019 | 4th solution: Multitask Learning, Semi-supervised Learning and Ensemble | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>In MixMatch, guessed labels are made by TTA (slicing and flip) of a training model (not a trained model). This does not suit this competition's data because TTA is not as effective here as it is on image data. </p>
<p>We wrote up a technical report of our solution, so you can check our solution in detail when it's published. Here I'd like to describe a brief summary of our solution. Our score is public 0.752 (3rd)/private 0.75787 (4th before pending). <br>
For this competition, we used 3 strategies mainly. <br>
1. Multitask learning with noisy labels.
2. Semi-supervised learning (SSL) with noisy data.
3. Averaging models trained with different time windows.
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/556685/13619/Fig1_3.png" alt="Multitask pipeline"></p>
<h2>Models</h2>
<ul>
<li>ResNet34 with log-mel</li>
<li>EnvNet-v2 with waveform.</li>
</ul>
<h2>Preprocessing</h2>
<ul>
<li>128 mels</li>
<li>128 Hz (347 STFT hop size)</li>
</ul>
<p>Log-mel is converted from power to dB after all augmentations applied. Thereafter, it is normalized by the mean and standard deviation of each data.</p>
<h2>Augmentations</h2>
<h3>Log-mel</h3>
<ul>
<li>Slicing</li>
<li>MixUp</li>
<li>Frequency masking</li>
<li>Gain augmentation</li>
</ul>
<p>We tried 2, 4 and 8 seconds (256, 512 and 1024 dimensions) as the slicing length, and 4 seconds scored the best. Resizing, warping, time masking and white noise didn't work.</p>
<ul>
<li>Additional slicing</li>
</ul>
<p>Expecting a stronger augmentation effect, after the basic slicing we shorten data samples to 25 - 100% of the basic slicing length by additional slicing, then extend them back to the basic slicing length with zero padding. </p>
<h3>Waveform</h3>
<ul>
<li>Slicing</li>
<li>MixUp</li>
<li>Gain augmentation</li>
<li>Scaling augmentation</li>
</ul>
<p>We tried 1.51, 3.02 and 4.54 seconds (66,650, 133,300 and 200,000 dimensions) as the slicing length, and 4.54 seconds scored the best. </p>
<h2>Training</h2>
<h3>ResNet</h3>
<ul>
<li>Adam</li>
<li>Cyclic cosine annealing (1e-3 -> 1e-6)</li>
<li>sigmoid and binary crossentropy</li>
</ul>
<h3>EnvNet</h3>
<ul>
<li>SGD</li>
<li>Cyclic cosine annealing (1e-1 -> 1e-6)</li>
<li>(sigmoid and binary crossentropy) or (SoftMax and KL-divergence)</li>
</ul>
<h2>Postprocessing</h2>
<p>Prediction using the full length of the audio input scores better than prediction using slicing TTA. This may be because the components important for classification are concentrated in the beginning part of the audio samples. Indeed, prediction with slices from the beginning part scores better than prediction with slices from the latter part. We found padding augmentation to be an effective TTA: it applies zero padding of various lengths to both sides of the audio samples and averages the prediction results. We think this method has the effect of emphasizing the start and end parts of the audio samples.</p>
<h2>Multitask learning</h2>
<p>In this task, the curated data and noisy data are labeled in a different manner, therefore treating them as the same one makes the model performance worse. To tackle this problem, we used a multitask learning (MTL) approach. The aim of MTL is to get synergy between 2 tasks without reducing the performance of each task. MTL learns features shared between 2 tasks and can be expected to achieve higher performance than learning independently. In our proposal, an encoder architecture learns the features shared between curated and noisy data, and the two separated FC layers learn the difference between the two data. In this way, we can get the advantages of feature learning from noisy data and avoid the disadvantages of noisy label perturbation. <br>
Multitask module's components: <br>
<code>FC(1024)-ReLU-DropOut(0.2)-FC(1024)-ReLU-DropOut(0.1)-FC(80)-sigmoid</code> <br>
We used BCE as the loss function, with the loss weight ratio of curated to noisy set to 1:1. With this method, CV lwlrap improved from 0.829 to 0.849 and the public LB score increased by +0.021.</p>
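<p>A sketch of the shared-encoder / two-head arrangement described above (the encoder, feature dimension and batch routing are assumptions; the head widths follow the write-up, with the sigmoid applied via a BCE-with-logits loss):</p>
<p>```
import torch.nn as nn

def make_head(feat_dim, n_classes=80):
    return nn.Sequential(
        nn.Linear(feat_dim, 1024), nn.ReLU(), nn.Dropout(0.2),
        nn.Linear(1024, 1024), nn.ReLU(), nn.Dropout(0.1),
        nn.Linear(1024, n_classes))            # sigmoid comes from BCEWithLogitsLoss

class MultiTaskTagger(nn.Module):
    def __init__(self, encoder, feat_dim=512, n_classes=80):
        super().__init__()
        self.encoder = encoder                 # e.g. a ResNet34 feature extractor
        self.curated_head = make_head(feat_dim, n_classes)
        self.noisy_head = make_head(feat_dim, n_classes)

    def forward(self, x, is_noisy=False):
        feat = self.encoder(x)
        return self.noisy_head(feat) if is_noisy else self.curated_head(feat)
```</p>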
<h2>Semi-supervised learning</h2>
<p>Applying SSL in this competition is difficult for 2 reasons:
1. Difference of data distribution between labeled data and unlabeled data.
2. Multi-label classification task. </p>
<p>In particular, methods that generate labels online, like Mean Teacher or MixMatch, tend to collapse. We tried pseudo labels, Mean Teacher and MixMatch, and none of them were successful. <br>
Therefore, we propose an SSL method that is robust to data distribution difference and can handle multi-label data. For each noisy data sample, we guess the label using the trained model. The guessed label is sharpened by sharpening function proposed by MixMatch (soft pseudo label). As the temperature of sharpening function, we tried a value of 1, 1.5 or 2 and 2 was the best. Predictions of the trained model are obtained using snapshot ensemble with all the folds and cycle snapshots of 5-fold CV. We used MSE as a loss function. we set loss weight of semi-supervised learning as 20. <br>
By soft pseudo labeling, The CV lwlrap improved from 0.849 to 0.870. On the other hand, on the public LB, improvement in score was slight (+0.001). We used predictions of all fold models to generate soft pseudo label so that high CV may be because of indirect label leak. However, even if we use labels generated by only the same fold model which has no label leak, CV was improved as compared to one without SSL. The model trained with the soft pseudo label is useful as a component of model averaging (+0.003 on the public LB).</p>
<h2>Ensemble</h2>
<p>For model averaging, we prepared models trained under various conditions. In particular, we found that averaging models trained with different time windows is effective. We used the models below. In order to reduce prediction time, the cycles, padding lengths and averaging weights used for the final submissions were chosen based on CV.
1. ResNet34 slice=512, MTL
2. ResNet34 slice=512, MTL, SSL
3. ResNet34 slice=1024, MTL
4. EnvNet-v2 slice=133,300, MTL, sigmoid
5. EnvNet-v2 slice=133,300, MTL, SoftMax
6. EnvNet-v2 slice=200,000, MTL, SoftMax</p>
<h2>Comparison</h2>
<p>| Condition | CV lwlrap |
| --- | --- |
| model #1 | 0.868 |
| model #2 | 0.886 |
| model #3 | 0.862 |
| model #4 | 0.815 |
| model #5 | 0.818 |
| model #6 | 0.820 |
| #1 + #3 | 0.876 |
| #1 + #2 + #3 | 0.890 |
| #4 + #5 + #6 | 0.836 |
| submission 1 (#1 + #2 + #3 + #4 + #5 + #6) | 0.896 |
| submission 2 (no pad TTA) | 0.895 |</p>
<h2>Code</h2>
<p>I did all model training on kaggle kernels. So I can share all of them. Please note that code readability is low. For example, DataLoader class is named "MFCCDataset" but it actually loads log-mel.
<a href="https://www.kaggle.com/osciiart/freesound2019-solution-links?scriptVersionId=16047723">link to the summary of codes</a> <br>
The GitHub repository is available at <a href="https://github.com/OsciiArt/Freesound-Audio-Tagging-2019">here</a>. </p>
<h2>Public LB history log</h2>
<p>| Model | CV | Public LB |
| --- | --- | --- |
| ResNet18, MixUp, freq masking, slice=256 | 0.819 | 0.669 |
| <strong>ResNet34</strong>, MixUp, freq masking, slice=256 | 0.819 | 0.676 |
| ResNet18, MixUp, freq masking, slice=256, <strong>MTL</strong> | 0.824 | 0.690 |
| ResNet34, MixUp, freq masking, slice=256, MTL, <strong>snapshot ensemble cycle 1-4</strong> | 0.853 | 0.715 |
| ResNet34, MixUp, freq masking, gain, slice=256, MTL, snapshot ensemble cycle <strong>1-8</strong> | 0.862 | 0.718 |
| ResNet34 + <strong>EnvNet</strong> | 0.867 | 0.720 |
| ResNet34, MixUp, freq masking, gain, <strong>slice=512</strong>, MTL, snapshot ensemble cycle 1-8 (#1) | 0.860 | 0.731 |
| <strong>ResNet34 #1</strong> + EnvNet | 0.866 | 0.735 |
| ResNet34 #1 + EnvNet, <strong>padding</strong> | 0.874 | 0.737 |
| ResNet34 #1 + <strong>ResNet34 (SSL, #2)</strong> + <strong>EnvNet #4 + #5 + #6</strong> | 0.887 | 0.743 |
| ResNet34 #1, #2 + EnvNet #4 + #5 + #6, <strong>padding TTA</strong> | 0.893 | 0.746 |
| ResNet34 #1, #2 + <strong>ResNet34(slice=1024, #3)</strong> + EnvNet #4 + #5 + #6, padding TTA | 0.896 | 0.752 |</p>
<p>Thank you for letting me know. I made it public.</p> |
Freesound Audio Tagging 2019 | 6th place solution fastai | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Here are the write-up and code for my solution!</p>
<p><strong>Blog post:</strong> <a href="https://link.medium.com/Kv5kyHjcIX">https://link.medium.com/Kv5kyHjcIX</a>
<strong>Code:</strong> <a href="https://github.com/mnpinto/audiotagging2019">https://github.com/mnpinto/audiotagging2019</a></p>
<p><strong>Summary:</strong>
* Models: xresnets
* Image size: 256x256
* Mixup sampling from a uniform distribution
* Horizontal and Vertical Flip as new labels (total 320 labels)
* Compute loss only for samples with F2 score (with a threshold of 0.2) less than 1.
* Noisy data: ~3500 "good noisy samples" used the same way as curated data
* TTA: Slice clips every 128 px along the time axis (no overlap), generate predictions for each slice, and take the per-class <code>max</code> (see the sketch below).
* Final submissions: 1) Average of 2 models: public LB 0.742, private LB 0.74620; 2) Average of 6 models: public LB 0.742, private LB 0.75421.</p>
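<p>A small sketch of the slicing TTA from the summary above; the <code>predict_fn</code> interface and the padding of short clips are assumptions:</p>
<p>```
import numpy as np

def slice_tta_max(predict_fn, spec, slice_width=128):
    n_mels, t = spec.shape
    if t < slice_width:                        # pad short clips to one full slice
        spec = np.pad(spec, ((0, 0), (0, slice_width - t)))
        t = slice_width
    preds = [predict_fn(spec[:, s:s + slice_width])          # (n_classes,) per slice
             for s in range(0, t - slice_width + 1, slice_width)]
    return np.max(preds, axis=0)               # per-class max over slices
```</p>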
<p><strong>Additional observations:</strong>
* I found better results when using random crops of 128x128 rescaled to 256x256, compared to random crops of 128x256 rescaled to 256x256; I was expecting the opposite.
* I wonder why max_zoom=1.5 works; I would not expect so.</p>
<p><strong>Acknowledgements:</strong>
<a href="/daisukelab">@daisukelab</a> thanks for the code to generate the mel spectrograms! Thanks to everyone that contributed in the discussions or with kernels. And finally, thanks to the organizers for this great competition!</p>
<p><a href="/action">@action</a>, I'm not sure how much it did improve, but after the competition is finalised, I will check that and give some feedback. Perhaps what I also need to check is how does this compare with ensembling four models, one for each "flip type".</p>
<p>Thanks, <a href="/ebouteillon">@ebouteillon</a>! And congrats for your great results also! The attention layer makes the training slower though. I will try to understand the performance of different models better after the competition is finalised.</p>
<p><a href="/zacheism">@zacheism</a> You are welcome, I'm glad my solution did help on your project. I choose a uniform distribution because for audio spectrograms combining clips seems more natural than combining regular pictures. But I think I didn't compare the two, at least in my final models. I would be interested to know if you find differences between the two approaches in your experiments.</p>
<p><a href="/daisukelab">@daisukelab</a> Thanks! Your code was a great starting point for many people in this competition.</p> |
Freesound Audio Tagging 2019 | 7th place solution with commentary kernel | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We did not notice that effect. If we were carefully modeling with the meaning of resizing in mind, LB score might have been a bit better. Thank you.</p>
<h1>Freesound 7th place solution</h1>
<p>Thank you to all the competitors, the Kaggle team, and the host of this competition!
We enjoyed a lot during the competition and learned many things. </p>
<p>We especially want to thank <a href="/daisukelab">@daisukelab</a> for his clear instruction with great kernels and datasets, <a href="/mhiro2">@mhiro2</a> for sharing excellent training framework, and <a href="/sailorwei">@sailorwei</a> for showing his Inception v3 model in his public kernel.</p>
<p>The points where the solution of our team seems to be different from other teams are as follows.</p>
<p>Key points: Data Augmentation, Strength Adaptive Crop, Custom CNN, RandomResizedCrop</p>
<p>The detailed explanation is in the following kernel, so please read it. </p>
<p><a href="https://www.kaggle.com/hidehisaarai1213/freesound-7th-place-solution">https://www.kaggle.com/hidehisaarai1213/freesound-7th-place-solution</a> </p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/564097/13702/Shirogane_solution.png" alt="pipeline"> </p>
<h2>Data Augmentation</h2>
<p>We created 7 augmented training dataset with <a href="http://sox.sourceforge.net/">sox</a>.</p>
<ul>
<li>fade</li>
<li>pitch * 2</li>
<li>reverb</li>
<li>treble & bass * 2</li>
<li>equalize </li>
</ul>
<p>We trained on a total of 4970 * 8 samples without leaks.</p>
<h2>Crop policy</h2>
<p>We use random crop because we use a fixed image size (128 * 128).
Random crop got a slightly better CV than cropping the first 2 seconds.
At first, it was cropped uniformly as in mhiro's kernel.</p>
<h3>Strength Adaptive Crop</h3>
<p>Many sound clips have important information in the first few seconds.
Some samples have it in the middle of the clip. However, due to the nature of recording, it is rare for important information to be at the end of a sound.
The score drops by about 0.03~0.04 when learning only from the last few seconds.</p>
<p>Therefore, we introduce <strong>Strength Adaptive Crop</strong>:
we preferentially crop the regions where the total dB is high. </p>
<p>This method is very effective because most samples contain important information in places where the sound is loud. </p>
<p>CV 0.01 up <br>
LB 0.004~0.005 up </p>
<p>Detailed code is in <a href="https://www.kaggle.com/hidehisaarai1213/freesound-7th-place-solution">this kernel</a>.</p>
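<p>The kernel linked above has the real implementation; as a rough idea of what "strength adaptive" cropping can look like, here is a hedged sketch that samples a few candidate windows and keeps the one with the highest total dB (the actual code may instead weight the sampling probability by loudness):</p>
<p>```
import numpy as np

def strength_adaptive_crop(logmel, crop_width=128, n_candidates=10):
    # logmel: (n_mels, time) spectrogram in dB
    n_mels, t = logmel.shape
    if t <= crop_width:
        return logmel
    starts = np.random.randint(0, t - crop_width, size=n_candidates)
    energies = [logmel[:, s:s + crop_width].sum() for s in starts]
    best = starts[int(np.argmax(energies))]    # loudest candidate window wins
    return logmel[:, best:best + crop_width]
```</p>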
<h2>model structure</h2>
<ul>
<li>InceptionV3 3ch </li>
<li>InceptionV3 1ch </li>
<li>CustomCNN </li>
</ul>
<p>CustomCNN is carefully designed to the characteristics of the sound. The details are in <a href="https://www.kaggle.com/hidehisaarai1213/freesound-7th-place-solution">this kernel</a></p>
<h2>Augmentation in batch</h2>
<ul>
<li>Random erasing or Coarse Dropout</li>
<li>Horizontal Flip</li>
<li>mixup</li>
<li>Random Resized Crop (only InceptionV3) </li>
</ul>
<h2>Training strategy</h2>
<h3>TTA for validation</h3>
<p>When RandomResizedCrop is used, the validation score fluctuates, so without validation TTA an appropriate epoch cannot be selected. We therefore used TTA for validation to ensure that validation can be properly evaluated. </p>
<h3>Stage 1 : pretrain with noisy data(warm-up)</h3>
<p>We used noisy data to 'pre-train' our model.
LB : about 0.01 up</p>
<h3>Stage 2 : Train with curated data 1</h3>
<p>We used curated data to 'finetune' the models, which were 'pre-trained' with noisy data.</p>
<h3>Stage 3 : Train with curated data 2(Inception only)</h3>
<p>We used the stage 2 best weights for stage 3 training without RandomResizedCrop. We don't know why, but lwlrap goes up without RandomResizedCrop in the Inception models.</p>
<h3>score</h3>
<p>Accurate single-model public scores were not measured.
| Model | Public | Private |
| --- | --- | --- |
| Inception 3ch | over 0.724 | 0.73865 |
| Inception 1ch | ?? | 0.73917 |
| CustomCNN | over 0.720 | 0.73103 |</p>
<h2>Ensemble</h2>
<p>(Inception 3ch + Inception 1ch + CustomCNN) / 3 </p>
<p>private score:0.75302</p> |
Freesound Audio Tagging 2019 | 96th Place Solution: validation of noisy data improve ~0.02 LB score | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Background:</strong> First time kaggler, no server with GPU locally, all computation was on Kaggle kernels.
<strong>Final Workflow:</strong>
1. Curated-only 5-fold CNN model got the optimal LB score
2. Preprocess noisy data to get a raw selected noisy dataset, by clustering 7 features (both temporal and spectral features)
3. Validate the selected noisy dataset by curated-only model
4. Train the mixed model with the curated and validated noisy data, got around 0.02 score boost.</p>
<p><strong>The detailed workflow pic is too big to display in the post, so here it is</strong> <a href="https://drive.google.com/file/d/1AaFDTsedh-3liF7K4V-QAyCWA8l8RtA_/view?usp=sharing">Final Setup</a>
- <strong>Features for deep learning:</strong>
After inspecting the distribution of audio durations in both the curated and test datasets, I tried clip durations of 1, 2 and 3 seconds, mel channels of 128, 180 and 192, and other parameter combinations. The best result came with the config in the final setup above: 2 s clips gave a 0.01 LB boost over 1 s and a 0.02-0.03 boost over 3 s. 180 mel channels improved the result by 0.02 LB, similar to 192 (I chose the smaller feature size to save computation time).
- <strong>Noisy Data Preprocessing:</strong>
- - This step is mainly for reducing the size of noisy dataset, for it's difficult to deal with all the noisy data in limited kaggle kernels.
- - Thanks to Essentia: <a href="https://essentia.upf.edu/documentation/algorithms_reference.html">https://essentia.upf.edu/documentation/algorithms_reference.html</a>
- - Found a critical error while writing this summary: I used the wrong window-size config (too short) for feature extraction, which may make this step useless and more like a random pick of noisy data.
- <strong>Random pick:</strong>
The model would overfit the curated data if all frames from the audio files were used. I tried max = 5 or 10; 5 gave the better result, a 0.01 LB boost over max = 10.
- <strong>Balance the data or not?</strong>
There is no time left for me to do the test with balanced dataset, because I chose the wrong balance method at first by both undersampling and upsampling each class to the average count, which produced a worse result because undersampling would reduce the useful data. When I realized this at a pretty late stage, I decided to use the unbalanced dataset, considering the lwlrap from curated-only model didn't show much bias against the imbalance. But this is still an open question and any discussion is welcome.
- <strong>Network Choice</strong>
I tried several network setups, including a CRNN with GRU (it produced competitive results with curated-only data, but there was no time to tune the config); a 1d RNN network with raw data as input was also built and tried but performed badly. So I finally chose VGG16 and did some experiments on the last layers.
Key takeaways:
1. Two FC layers with 2048 neurons produced worse local lwlrap than one FC with 512, and may produce memory errors in a kaggle kernel. Though not fully tested, it looks in my experiments as if a single FC at the last model stage produced the better result.
2. Thanks to a newly published paper <em><a href="https://arxiv.org/pdf/1905.05928.pdf">Rethinking the Usage of Batch Normalization and Dropout in the Training of Deep Neural Networks</a></em>, my LB score improved by 0.03 after changing the unit order accordingly, that is, BN-Dropout before the weight layer.
3. Increasing the dropout rate from 0.2 to 0.5 to 0.7 boosted the LB score by 0.031 and 0.001, respectively. Though 0.5 and 0.7 produced similar results, choosing 0.7 may make the model more robust to overfitting.
4. Batch size didn't show much difference in my experiments (batch sizes 64, 128, 256).
5. The more training folds, the better the performance, but it requires more training time. 10-fold improved only 0.004 compared to 5-fold, so I used 5-fold in the end for both the curated and mixed models.
6. The CRNN network showed a different per-class lwlrap profile and may be a complement to the CNN. (But as a beginner in sound scene classification whose future focus is model interpretability, I'd rather not use ensembling and instead focus on finding a single optimal network this time.)
- <strong>Data Augmentation:</strong>
SpecAugment without time warping, and mixup are used.
- <strong>Noisy Data Validation:</strong>
<em>confidence value = r × label_weight_per_class / lwlrap_per_class</em>
The greater <em>r</em> is, the stricter the validation rule is. The best score is produced when r = 5.
- <strong>Test Setup:</strong>
Found no improvement with any TTA, so I simply use the average of the predictions from all frames.</p>
<p>Thanks to Kaggle and the competition organizers. I've learnt a lot throughout this competition. Good luck to all in the second stage.</p>
Freesound Audio Tagging 2019 | 9th place solution: smaller and faster | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The key point with 64 mels is using small models with an optimized architecture. Regular computer vision models require 128 mels, and possibly also up-scaling, as we did in our first attempts, which makes the models quite slow at training and inference. Meanwhile, with the 64 mel setup we could reach a 0.715 public LB score for a single 5-fold model trained within one kernel (~6 hours). Ensembling boosted it to 0.739.</p>
<p>We will release the kernel used for training the models when private LB is available.</p>
<p><a href="/ebouteillon">@ebouteillon</a>, I wish u also the best luck. The idea suggested by <a href="/theoviel">@theoviel</a> about 64 mels is really unexpected to work. However, when I tried to switch back to 128 mels in our best setup later, I didn't see improvement. The thing could be also that for 128 mels ~100 epochs (in total) was not enough since stage1 Dnet models we trained for 300-400 epochs.</p>
<p>Thanks, and good luck at the stage 2.
We have applied it to all samples in a batch, and we did not use any special consideration for any classes. The implementation is borrowed from <a href="https://github.com/facebookresearch/mixup-cifar10/blob/master/train.py">here</a>. </p>
<p>You are welcome and thanks, u2 best luck at the stage 2</p>
<p>Thanks, u2. And thank you for your kernels, I used one as a starting point for mel preprocessing.</p>
<p>First, I would like to congratulate all winers and participants of this competition and thank kaggle and the organizers for posting this interesting challenge and providing resources for participating in it. Also, I want to express my highest gratitude and appreciation to my teammates: <a href="/suicaokhoailang">@suicaokhoailang</a>, <a href="/theoviel">@theoviel</a>, and <a href="/yanglehan">@yanglehan</a> for working with me on this challenge. It was the first time when I worked with audio related task, and it was a month of quite intense learning. I would like to give special thanks to <a href="/theoviel">@theoviel</a> for working quite hard with me on optimization of our models and prepare the final submissions.</p>
<p>The code used for training our final models is available now at <a href="https://www.kaggle.com/theoviel/9th-place-modeling-kernel">this link</a> (private LB 0.72171 score, top 42, with a single model (5 CV folds) trained for 7 hours). Further details of our solution can be found in our <a href="https://storage.googleapis.com/kaggle-forum-message-attachments/563780/13697/DCASE2019_Challenge.pdf">DCASE2019 technical report</a>.</p>
<p>The <strong>key points</strong> of our solutions are the following (see details below): (1) Use of <strong>small and fast models with optimized architecture</strong>, (2) Use of <strong>64 mels</strong> instead of 128 (sometimes less gives more) with <strong>4s duration</strong>, (3) Use <strong>data augmentation and noisy data</strong> for pretraining.</p>
<p><strong>Stage 1</strong>: we started the competition, like many participants, by experimenting with common computer vision models. The input size of the spectrograms was 256x512 pixels, with the input image upscaled along the first dimension by a factor of 2. With this setup the best performance was demonstrated by DenseNet models: they outperformed the baseline model published in <a href="https://www.kaggle.com/mhiro2/simple-2d-cnn-classifier-with-pytorch">this kernel</a> and successfully used in the competition a year ago; Dnet121 was also faster. With pretraining on the full noisy set, spectral augmentation, and MixUp, CV could reach ~0.85+, and the public scores for a single fold, 4 folds, and an ensemble of 4 models were 0.67-0.68, ~0.70, and 0.717, respectively. Although these models are not used for our submission, these experiments provided important insights for the next stage.</p>
<p><strong>Stage 2</strong>:
<strong>Use of noisy data</strong>: It is the main point of this competition that organizers wanted us to focus on (no external data, no pretrained models, no test data use policies). We have used 2 strategies: (1) pretraining on full noisy data and (2) pretraining on a mixture of the curated data with most confidently labeled noisy data. In both cases the pretraining is followed by fine tuning on curated only data. The most confident labels are identified based on a model trained on curated data, and further details can be provided by <a href="/theoviel">@theoviel</a>. For our best setup we have the following values of CV (in stage 2 we use 5 fold scheme): training on curated data only - 0.858, pretraining on full noisy data - 0.866, curated data + 15k best noisy data examples - 0.865, curated data + 5k best noisy data examples - 0.872. We have utilized all 3 strategies of noisy data use to create a variety in our ensemble.</p>
<p><strong>Preprocessing</strong>: According to our experiments, big models, like Dnet121, work better on 128 mels and even higher image resolution, while the default model reaches its best performance with 64 mels. This setup also decreases training time and improves convergence speed. 32 mels could also be considered, but the performance drops to 0.856 for our best setup. Use of 4s intervals instead of the traditional 2s also gave a considerable boost. The input image size for the model is 64x256x1. We tried normalization based on both per-image and global train-set statistics, and the results were similar, though our best CV is reached with global normalization. In the final models we used both strategies to create diversity. We also tried to experiment with the fft window size but did not see a significant difference and stayed with 1920. One thing to try that we didn't have time for is using mels computed with different window sizes as channels of the produced image. In particular, <a href="https://arxiv.org/abs/1706.07156">this paper</a> shows that some classes prefer a longer while others a shorter window size. The preprocessing pipeline is similar to the one described in <a href="https://www.kaggle.com/daisukelab/creating-fat2019-preprocessed-data">this kernel</a>.</p>
<p><strong>Model architecture</strong>: At stage 2 we used the model from <a href="https://www.kaggle.com/mhiro2/simple-2d-cnn-classifier-with-pytorch">this kernel</a> as a starting point. The performance of this base model for our best setup is 0.855 CV. Based on our prior positive experience with DensNet, we added dense connections inside convolution blocks and concatenate pooling that boosted the performance to 0.868 in our best experiments (model M1):
</p>
<p>```
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, pool=True):
        super().__init__()
        padding = kernel_size // 2
        self.pool = pool
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size,
                      stride=1, padding=padding),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(),
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(out_channels + in_channels, out_channels,
                      kernel_size=kernel_size, stride=1, padding=padding),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(),
        )

    def forward(self, x):  # x.shape = [batch_size, in_channels, a, b]
        x1 = self.conv1(x)
        x = self.conv2(torch.cat([x, x1], 1))
        if self.pool:
            x = F.avg_pool2d(x, 2)
        return x  # x.shape = [batch_size, out_channels, a//2, b//2]
```</p>
<p>The increase of the number of convolution blocks from 4 to 5 gave only 0.865 CV. Use of pyramidal pooling for the 2nd, 3rd and 4th conv blocks (M2) gave a slightly worse result than M1. Finally, our ultimate setup (M3), consisting of 5 conv blocks with pyramidal pooling, reached 0.872 CV. DenseNet121 in the same pipeline reached only 0.836 (DenseNet121 requires higher image resolution, 256x512, to reach 0.85+ CV). From the experiments, it looks like for audio it is important to have nonlinear operations before size reduction by pooling, though we did not check this in detail. We used M1, M2, and M3 to create variability in our ensemble. Because of the submission limit we checked the performance of only a few models on the public LB, with the best single model scoring 0.715.</p>
<p>This plot illustrates architecture of M3 model:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1212661%2F4c15c1ec24d235c7834cc1ae94a3ca38%2FM3.png?generation=1561588228369529&alt=media" alt=""></p>
<p>Here, all tested models are summarized:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1212661%2Ffaca75bb8fb9dc910a648d1c141313be%2Fmodels.png?generation=1561746099636890&alt=media" alt=""></p>
<p>And here the model performance for our best setup:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1212661%2Ff7014dbf8de75685fff13c683384f068%2Fp.png?generation=1561588425442446&alt=media" alt=""></p>
<p><strong>Data augmentation</strong>:
The main thing for working with audio data is MixUp. In contrast to images of real objects, <a href="https://towardsdatascience.com/whats-wrong-with-spectrograms-and-cnns-for-audio-processing-311377d7ccd">sounds are transparent</a>: they do not overshadow each other. Therefore, MixUp is very efficient for audio and gives a 0.01-0.015 CV boost. At stage 1 the best results were achieved with the MixUp alpha parameter equal to 1.0, while at stage 2 we used 0.4. <a href="https://arxiv.org/abs/1904.08779">Spectral augmentation</a> (with 2 masks for frequency and time, with coverage ranges between zero and 0.15 and 0.3, respectively) gave about a 0.005 CV boost. We did not use stretching of the spectrograms in the time domain because it gave lower model performance. In several models we also used <a href="https://arxiv.org/abs/1905.09788">Multisample Dropout</a> (other models were trained without dropout), though it decreased CV by ~0.002. We did not apply horizontal flip since it decreased CV and is also not natural: I do not think people would be able to recognize sounds played backwards. It is the same as training ImageNet with vertical flips.</p>
<p><strong>Training</strong>: At the pretraining stage we used one cycle of cosine annealing with warm up. The maximum lr is 0.001, and the number of epochs is ranged between 50 and 70 for different setups. At the stage of fine tuning we applied ReduceLROnPlateau several times to alternate high and low lr. The code is implemented with Pytorch. The total time of training for one fold is 1-2 hours, so the training of entire model is 6-9 hours. Almost all our models were trained at kaggle, and the kernel is at <a href="https://www.kaggle.com/theoviel/9th-place-modeling-kernel">this link</a>.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1212661%2F51ade3b68a7ab201b15fb9736d618a7b%2Flr.png?generation=1561588570346398&alt=media" alt=""></p>
<p><strong>Submission</strong>:
Use of 64 mels has allowed us also to shorten the inference time. In particular, each 5-fold model takes only 1-1.5 min for generation of a prediction with 8TTA. Therefore, we were able to use an ensemble of 15 models to generate the final submissions.
The final remark about the possible reason of the gap between CV and public LB: <a href="https://arxiv.org/pdf/1904.05635v1.pdf">this paper</a> (Task 1B) reports 0.15 lwlrap drop for test performed on data recorded with different device vs the same device as used for training data. So, the public LB data may be selected for a different device, but we never know if it is true or not.</p>
<p><a href="/hwasiti">@hwasiti</a>, It is quite impressive that u could get that far just within 2 days, hope u also get a good score at the stage 2. Regarding your question, the main reason why I focuses mostly on dense connections (within the conv block and pyramid pooling) is that Dnet models worked much better than ResNet, ResNeXt, CBAM ResNeXt, NasNet, etc. at stage 1. Also, concat pooling gave quite noticeable boost in my early testing. Later I tried other things too, but they didn't work that well. The thing I didn't expect is that traditional 7x7 conv followed by pooling in the first layer doesn't really work here. Probably, if one just took Dnet121 and replace its first block, it could work quite well on 64 mels, but I didn't have time to check it. And thank you for your best wishes.</p>
<p>The kernel right now is available at <a href="https://www.kaggle.com/theoviel/9th-place-modeling-kernel">https://www.kaggle.com/theoviel/9th-place-modeling-kernel</a></p>
<p>Thanks so much, I'm happy to know that my kernels were useful.</p> |
Freesound Audio Tagging 2019 | A Naive Solution (28th in Private LB) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I'd like to thank organizers and all participants, and congratulate winners on excellent results !
For me, this is the first image(sound) competition in Kaggle and I have learned a lot.</p>
<p>Many top teams unveiled their nice solutions (and scores). So I'm a bit embarrassed to publish my solution here, but I will do so as a memento. </p>
<hr>
<h1>Local Validation</h1>
<p>I created an 80-class-balanced 5-fold validation and evaluated the model performance with BCE. At an early stage of this competition, I checked the correlation between BCE and LWLRAP. The figure below shows a reasonably good correlation. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F479538%2F01d7ab43832e5eb5eec8a139fdb69c19%2Ffig1.png?generation=1561761823509063&alt=media" alt=""></p>
<h1>Features</h1>
<ol>
<li>Remove silent audios</li>
<li>Trim silent parts</li>
<li>Log Mel Spectrogram ( SR 44.1kHz, FFT window size 80ms, Hop 10ms, Mel Bands 64 )</li>
<li>Clustered frequency-wise statistical features</li>
</ol>
<p>I did nothing special, but I will explain <strong>4</strong>. <br>
I thought a CNN can catch audio properties in frequency space, like Fourier analysis, but is weaker on statistical values (max, min, ...). Also, the labels of train-noisy are unreliable, so I created frequency-wise statistical features from the spectrograms and computed cluster distances without using any label information (unsupervised). The procedure is the following: </p>
<p>a. Compute 25 statistical values (Max, Min, Primary difference, Secondary difference, etc... ) and flatten 64 x 25 (=1600) features
b. Compute 200 cluster distances with MiniBatchKMeans ( dimensional reduction from 1600 to 200 )</p>
<p>This feature pushed my score about 0.5 ~ 1% in each model. </p>
<h1>Models, Train and Prediction</h1>
<p>In this competition, we do not have a lot of time to make inferences ( less than 1 GPU hour ). So I selected 3 relatively light-weight models. </p>
<ul>
<li>Mobile Net V2 with/without clustered features</li>
<li>ResNet50 with/without clustered features</li>
<li>DenseNet121 with/without clustered features</li>
</ul>
<p>My final submission is the ensemble of above 6 models ( 3 models x 2 type of features ). Here is a model pipeline. The setting of train and prediction with TTA is written in this figure. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F479538%2Fd21b6e837c1be18cc4a66bbbc82c3174%2Ffs2019_model-pipeline_01.png?generation=1561760818137707&alt=media" alt=""></p>
<h1>Performance</h1>
<p>I used the weighted geometric averaging to blend 6 model predictions. Below table shows each performance and the blending coefficients. Those coefficients are computed with optimization based on Out Of Fold predictions. LWLRAP values are calculated with 5 fold OOF predictions on train-curated data.</p>
<p>| Model | LWLRAP on train-curated | Blending Coefficient |
| --- | --- | --- |
| MobileNetV2 with clustered features( cf ) | 0.84519 | 0.246 |
| MobileNetV2 without cf | 0.82940 | 0.217 |
| ResNet50 with cf | 0.84490 | 0.149 |
| ResNet50 without cf | 0.83006 | 0.072 |
| DenseNet121 with cf | 0.84353 | 0.115 |
| DenseNet121 without cf | 0.83501 | 0.201 |
| <strong>Blended</strong> | 0.87611 ( Private 0.72820 ) | --- |</p>
<p>Thank you very much for reading to the end. <br>
See you at the next competition !</p>
<p><a href="/bossimuimu">@bossimuimu</a>
Thanks for your question. <br>
I made a helper function, modified version in <a href="https://www.kaggle.com/titericz/lightgbm-simple-solution-lb-0-203">this kernel</a></p>
<p>Hope this is what you asked. </p>
<p>```
import numpy as np
import pandas as pd

def create_features(logmel_fn):
    ## load log-mel array
    lm_ar = mel_0_1(np.load(logmel_fn))
    lm_df = pd.DataFrame(lm_ar)

    stat_l = []
    stat_l.append(lm_df.mean(axis=1).values)
    stat_l.append(lm_df.median(axis=1).values)
    stat_l.append(lm_df.std(axis=1).values)
    stat_l.append(lm_df.max(axis=1).values)
    stat_l.append(lm_df.min(axis=1).values)
    stat_l.append(lm_df.skew(axis=1).values)
    stat_l.append(lm_df.mad(axis=1).values)
    stat_l.append(lm_df.kurtosis(axis=1).values)
    stat_l.append(np.abs(lm_ar).max(axis=1))
    stat_l.append(np.abs(lm_ar).min(axis=1))
    stat_l.append(np.abs(lm_ar).mean(axis=1))
    stat_l.append(np.abs(lm_ar).std(axis=1))
    stat_l.append(lm_ar.max(axis=1) / (np.abs(lm_ar.min(axis=1)) + 1e-32))
    stat_l.append(lm_ar.max(axis=1) - np.abs(lm_ar.min(axis=1)))
    stat_l.append(lm_ar.sum(axis=1))
    stat_l.append(lm_df.quantile(0.99, axis=1).values)
    stat_l.append(lm_df.quantile(0.95, axis=1).values)
    stat_l.append(lm_df.quantile(0.1, axis=1).values)
    stat_l.append(lm_df.quantile(0.05, axis=1).values)
    stat_l.append(np.mean(np.diff(lm_ar), axis=1))
    stat_l.append(np.max(np.diff(lm_ar), axis=1))
    stat_l.append(np.min(np.diff(lm_ar), axis=1))
    stat_l.append(np.mean(np.diff(np.diff(lm_ar)), axis=1))
    stat_l.append(np.max(np.diff(np.diff(lm_ar)), axis=1))
    stat_l.append(np.min(np.diff(np.diff(lm_ar)), axis=1))
    return np.array(stat_l).T
```</p>
<p><a href="/jmourad100">@jmourad100</a>
Thanks!
It may be unexpected for you, I used a just power point to create this figure. <br>
( Someone called me <code>Power Point Master</code> not kaggle master ;) )</p> |
Freesound Audio Tagging 2019 | Main features of final solution [22 on public] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>1) Using power(0.125) instead of log for spectrograms (see the sketch after this list);
2) Selecting a balanced (or unbalanced) sample of noisy files that are well predicted by the current model (1500, 2500, 5000, 10000), with priority for rarely predicted files on the validation set;
3) Different spectrogram-extraction parameters (hop, n_mels). We used:
- rate: 44100, n_mels: {128, 256}, n_fft: 2560, hop_length: {512, 694, 1024}, fmin: 20;
- dynamic hop preprocessing: the hop is chosen depending on the length of the sound (min_hop: 16, max_hop: 1024);</p>
<p>4) Mixup with beta distribution (0.2, 0.2);
5) Class_weights in BCE calculated by the ratio of the number of classes predicted as top1 on validation;
6) Train with different seeds and choosing the best model fold by fold ^);
7) The ensemble of 7 relatively uncorrelated models.</p>
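<p>A minimal sketch of point 1, using the extraction parameters from point 3 and assuming librosa:</p>
<p>```
import librosa

def mel_power_compressed(y, sr=44100, n_mels=128, n_fft=2560, hop_length=512, p=0.125):
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length, n_mels=n_mels, fmin=20)
    return mel ** p        # instead of librosa.power_to_db(mel)
```</p>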
<p><a href="/daisukelab">@daisukelab</a> all of this hacks help us to build 7 uncorrelated models)</p>
<p>We used power(0.125) almost from the start of the competition and I don’t know how much it improved pipeline in comparison with log.
But selected noisy (point 2) and mixup (point 4) boost all of our models.</p> |
Freesound Audio Tagging 2019 | My write up. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yeah, that .76 model probably overfit like crazy too. </p>
<p>.723 to be accurate. With the limited time I had after work and family, I am happy with silver range scores. </p>
<p>It's finally weekend, that I have time to write a quick post. It might be disappointing if you were come in here to see if there is any magic or tricks.</p>
<p>As many of you know, there is quite some difference even between the curated training set and the public test set. For example, those almost duplicated 'Mr. Peanut' clips were very bad and would confuse the model on the test set. For the noisy set, the differences are much larger. The slams seem to be on a basketball court (slam dunk?), the tap is mostly tap dancing, and the surf and wave clips are much windier. It also has notably more human speech in clips labeled with non-human sounds. I also tried many methods to use the noisy set, but with limited success, reaching only .72.</p>
<p>My .76 model is fine-tuned to overfit the test set. I spent my Memorial Day holiday weekend hand-making a tuning set with cherry-picked clips from the curated and noisy sets that sound most similar to the public test set, and used this dataset to fine-tune my models. It is kind of cheating, in that it is equivalent to sneaking a peek at the exam paper and giving my model a targeted list of past quiz questions to practice right before the exam. It will do well on this particular exam, but might not do well on a new one. </p>
<p>I did not use this method as my final submission, because I don't think it solves the problem the competition is about. I did a similar thing in my previous <a href="https://www.kaggle.com/c/inclusive-images-challenge/">competiton</a> which I missed a top spot for not submitting a fine-tuned model.</p>
<p>So, quickly, about my solution without the fine-tuning: it is conv nets trained with mixup and random frequency/time masks on the curated set and pseudo-labeled datasets. </p>
<p>I have two pseudo-label sets. Here is how I generated them. For each audio clip, I run my model over a sequence of windows on it. For each window, I calculate the ratio of the top-k activations to the sum of all activations. I call this ratio the signal-noise ratio. My first pseudo-label set consists of the windows with the highest ratios and their associated most-confident labels. My second pseudo-label set consists of full clips; I average the activations of the windows with the highest ratios to form the labels. During training, I zip the curated training set and the two pseudo-label sets, calculate the losses separately, and linearly combine them for the backward pass. </p>
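<p>As an illustration of that window-ratio idea, here is a rough sketch; <code>predict_window</code>, the window length and the hop are hypothetical stand-ins, not the author's actual code.</p>
<pre><code>import numpy as np

def window_snr(clip, predict_window, win_len, hop, top_k=3):
    """For each window, return (ratio, probs) with ratio = sum of top-k activations / sum of all."""
    results = []
    for start in range(0, max(1, len(clip) - win_len + 1), hop):
        probs = predict_window(clip[start:start + win_len])   # shape: (n_classes,)
        top = np.sort(probs)[::-1][:top_k].sum()
        ratio = top / (probs.sum() + 1e-8)                    # the "signal-noise ratio"
        results.append((ratio, probs))
    return results

# Pseudo-label set 1: windows with the highest ratios plus their most confident labels.
# Pseudo-label set 2: per clip, average the probs of the highest-ratio windows.
</code></pre>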
<p>At inference time, I do the same scanning through windows, calculating ratios, etc. to generate predictions as well. I found it better than averaging random crops. But this may simply be due to the fact that my model learned to focus on those strong short signals from the first pseudo-label set.</p>
Freesound Audio Tagging 2019 | Semi-supervised part of 20th solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First I want to thank researchers at CVSSP, University of Surrey, UK for their great baseline <a href="https://github.com/qiuqiangkong/dcase2019_task2">code</a>. My entire system is built upon it.</p>
<p><strong>Pre-processing</strong>
Nothing special. 32 kHz sampling rate, 500 hop size so that 64 fps. Silence trimming below <code>max - 55 dB</code>, mel size 128. Random excerpt 4-second patches from entire spectrograms. Shorter spectrograms are padded repeatedly.</p>
<p><strong>Data augmentation</strong>
1. Spec-augment without time warping
2. Time reversal with 80 more classes (binary target with 160 classes in total; a short sketch follows after this list)
3. I'll talk about MixUp later</p>
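<p>A minimal sketch of augmentation 2 above, under the assumption that targets are 80-dimensional multi-hot vectors and spectrograms are (n_mels, n_frames) arrays; this is an illustration, not the author's code.</p>
<pre><code>import numpy as np

def time_reverse_augment(spec, target, p=0.5):
    """Return the (possibly time-reversed) spectrogram and a 160-class target."""
    full_target = np.concatenate([target, np.zeros_like(target)])  # classes 0..79 = normal
    if p > np.random.rand():
        spec = spec[:, ::-1]                                       # reverse the time axis
        # move the active labels to the "reversed" half (classes 80..159)
        full_target = np.concatenate([np.zeros_like(target), target])
    return spec, full_target
</code></pre>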
<p><strong>Semi-supervised learning</strong>
Almost the same as <a href="https://link.springer.com/chapter/10.1007/978-3-030-20873-8_26">https://link.springer.com/chapter/10.1007/978-3-030-20873-8_26</a>. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1605499%2Fcc2a1c6259b8b72194abef259d524448%2Ffreesound_model.png?generation=1561783764472053&alt=media" alt=""></p>
<p>Suppose (V, y_V) is a batch of curated data and W is a batch of noisy data without labels; then we have
</p>
<p>The backbone is a 9-layer CNN, which is the same as the one in the public kernel and the CVSSP baseline.</p>
<p><strong>Loss function</strong>
Focal + ArcFace.
ArcFace is modified for multi-label classification. 2 reasons to use ArcFace here:
1. Enlarge the margin between positives and negatives, so that the threshold eta becomes less sensitive.
2. In the <a href="https://arxiv.org/abs/1801.07698">original ArcFace paper</a>, the authors state that ArcFace enforces less penalty towards samples whose feature vectors have large angle with the weight matrix than other margin-based loss. I suppose it would apply less "weight" to wrong pseudo-labels.</p>
<p>In practice using ArcFace makes my NN converge faster and have higher local lwlrap when using only curated data. I don't know what would happen if I trained the semi-supervised model without it because I started using it at a very early stage.</p>
<p><strong>Training Details</strong>
First I warm up the backbone CNN with the noisy set. For the semi-supervised model, I have 6 samples from the curated set and 58 samples from the noisy set in each batch. The Nadam optimizer and a cosine annealing learning rate with linear warm-up are used. One can refer to my technical report, Table 2, for details about the hyperparameters.</p>
<p><strong>Inference</strong>
Arithmetic mean on 0.25s sliding window. TTA only on clips with short duration by repeating the sliding window with Spec-augment. </p>
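<p>Here is a small sketch of such sliding-window inference, assuming the 0.25s refers to the hop between consecutive 4-second patches (16 frames at 64 fps) and that <code>model_predict</code> is a placeholder for the trained network.</p>
<pre><code>import numpy as np

def predict_clip(spec, model_predict, patch=256, step=16):
    n_frames = spec.shape[1]
    if patch > n_frames:                        # repeat-pad short clips
        reps = int(np.ceil(patch / n_frames))
        spec = np.tile(spec, (1, reps))[:, :patch]
        n_frames = patch
    preds = [model_predict(spec[:, s:s + patch])
             for s in range(0, n_frames - patch + 1, step)]
    return np.mean(preds, axis=0)               # arithmetic mean over windows
</code></pre>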
<p>Single model trained on entire training set gives me 0.712 lwlrap on stage 1 test.</p>
<p>Again, thanks to the host for hosting such an interesting competition. I am still on my way to my first gold medal :)</p>
Freesound Audio Tagging 2019 | Solution ready on the github | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is for you while waiting for 2nd stage result :)</p>
<p><a href="https://github.com/daisukelab/freesound-audio-tagging-2019">https://github.com/daisukelab/freesound-audio-tagging-2019</a></p>
<p><img src="https://github.com/daisukelab/freesound-audio-tagging-2019/raw/master/images/data_example.png" alt="title_picture"></p>
<p>Thank you to everybody who organized this competition, and congratulations to the winners.</p>
<p>It was a wonderful opportunity once again to dig deeper into audio data for scene understanding. I could gain intuition and understanding for real-world sound-scene-aware applications, which I'm hoping to see in the electric appliances around us in the future.</p>
<p>I'd like to share what I did during this competition beyond the public kernels, and how it worked out.</p>
<p>Topics Summary:
- Audio preprocessing to create mel-spectrogram-based features; 3 formats were tried.
- Audio preprocessing also tried background subtraction, focusing on sound events... not sure whether it worked or not.
- Tried to mitigate curated/noisy audio differences; inter-domain transfer regarding <em>frequency envelope</em>.
- Soft relabeling, by using co-occurrence probability of labels.
- Ensemble for multi-label problem; per-class per-model 2D weighting for better averaging.</p>
<p>You are welcome. Regarding 1-3, I updated it after rethinking about it. And ... found something wrong as well. ;)
- This is not based on a specific paper, but on a simple idea: adapt the frequency shape from one domain to the other.
- In image classification, we always apply the ImageNet mean/std to input images. This also inspired me to do 1-3.
- I changed the name of the basic idea from 'frequency envelope' to 'average power spectrum' for a better description. The initial idea came from adapting the frequency envelope, but what was actually done is adapting the average of the spectrum.
- Then I found this still needs improvement when I visualized some more examples. You can find them at the bottom part of <a href="https://github.com/daisukelab/freesound-audio-tagging-2019/blob/master/Visual_Average_Spectrum_Conversion_Gallery.ipynb">these visualizations</a>.</p>
<p>(But it was showing better performance... it needs more checking later.)</p>
Freesound Audio Tagging 2019 | Video of the 18th place solution and general discussion of the competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Early this month I gave a talk about the competition at ML Trainings. You can find the video by following the link below. I described the general approach to audio processing and talked about my solution as well as the ideas used by the winners.
<a href="https://youtu.be/m0-JHQeLf3k">https://youtu.be/m0-JHQeLf3k</a> </p>
<p>The talk is part of the meetup organized by ML Trainings (<a href="https://www.facebook.com/groups/1413405125598651/">https://www.facebook.com/groups/1413405125598651/</a>) – regular Moscow-based meetups backed by the ods.ai community. Usually the discussions about completed competitions are held in Russian, but recently some of them have started to be held in English, so you can find some English videos on the YouTube channel as well (and there will be more in the future).</p>
Freesound Audio Tagging 2019 | [155th place] My very first competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks a lot to the organizers - it was a fun task, especially the noisy dataset, which is full of hilarious gems.
It was an eventful competition on the tooling side too: Kaggle rolled out a new UI during the early days of the competition, resulting in completely missing kernel code and invisible cells,
and a new version of PyTorch was released.</p>
<p>Those are just my takeaways, moving forward to the next competitions:
- <strong>Build a team as early as possible</strong>. Sometimes it was very hard to motivate myself to stay up the whole night after a full working day at the day job. There are plenty of tasks of various scales and everyone will find their place.
- <strong>Good data == Good model / Bad data == bad model</strong>. I've heard this dozens of times. But it's very easy to get tunnel vision on network structure or training schedule and completely forget about input data transformations.
- <strong>Cross-validation. It was actually the first time I've used KFold</strong>. The problem was that I've stacked all predictions by simply computing the mean value. There are plenty of more advanced tactics that would've given slightly better results.
- <strong>Regularization is magic (sometimes)</strong>. Training a network is some kind of art :) I was using vanilla PyTorch and it requires a bit more in-depth knowledge compared to Fast.ai. That's just my impression.
- <strong>Octave</strong> convolution doesn't seem to work well on spectrograms (at least I didn't see any improvements). I haven't tried <strong>CoordConv</strong>, but it would be interesting to compare results.</p> |
Freesound Audio Tagging 2019 | [8th place solution] : SpecMix and warm-up pipeline | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank for noticing it. All credits goes to him. 😉</p>
<p>Hello,</p>
<p>I released my solution in a <a href="https://github.com/ebouteillon/freesound-audio-tagging-2019">github repository</a>. It is highly inspired by <a href="/daisukelab">@daisukelab</a>'s notebooks for preprocessing and <a href="/mhiro2">@mhiro2</a>'s simple CNN model.</p>
<p>Key points of this solution are 2 techniques I imagined (maybe they exist under another name 😄):
- <strong>warm-up pipeline</strong> : use the noisy dataset both as a pretraining source and in a semi-supervised way to increase diversity when generating a set of models. In other words, first use the noisy set to warm up model training, then fine-tune the model with the curated set, and then use the noisy set again in a semi-supervised way.
- <strong>SpecMix</strong> : my new data-augmentation technique which takes what I consider the best from SpecAugment and mixup. It generates new samples by applying frequency replacement and time replacement on inputs and computes a weighted average of the targets (a rough sketch follows after this list). More details in the github <a href="https://github.com/ebouteillon/freesound-audio-tagging-2019">README</a></p>
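<p>A rough sketch of the SpecMix idea as described above (frequency and time bands of one sample replaced by another, targets mixed by the replaced proportion); the band widths are assumptions, and the exact implementation is in the linked repository.</p>
<pre><code>import numpy as np

def specmix(spec_a, y_a, spec_b, y_b, max_band=0.25):
    n_mels, n_frames = spec_a.shape
    out = spec_a.copy()
    # frequency replacement
    f_w = int(np.random.uniform(0, max_band) * n_mels)
    f_0 = np.random.randint(0, n_mels - f_w + 1)
    out[f_0:f_0 + f_w, :] = spec_b[f_0:f_0 + f_w, :]
    # time replacement
    t_w = int(np.random.uniform(0, max_band) * n_frames)
    t_0 = np.random.randint(0, n_frames - t_w + 1)
    out[:, t_0:t_0 + t_w] = spec_b[:, t_0:t_0 + t_w]
    # weight the targets by the (approximate) replaced area
    replaced = f_w / n_mels + t_w / n_frames * (1 - f_w / n_mels)
    y = (1 - replaced) * y_a + replaced * y_b
    return out, y
</code></pre>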
<p>Inference is done using an ensemble of 2 models (<a href="/mhiro2">@mhiro2</a>'s simple CNN model + VGG16) with 10-fold CV.</p>
<p>More details are available in the github <a href="https://github.com/ebouteillon/freesound-audio-tagging-2019">README</a>, as well code and weights.</p>
<p>You are welcome. Good luck!</p>
<p>Your kind words are appreciated</p>
<p>My code is a bit crappy, I should refactor it and comment it. Thanks for your kernels and nice comment. :smile:
Wishing you all the best for the second stage!</p>
<p>It should be me that is thanking you for the model that I borrowed, it helped me a lot :smile:
Wishing you all the best for the second stage!</p>
<p>Thank you for your nice words.
Wishing you all the best for the second stage!</p>
<p>Hi my Kaggling friends,
Just to let you know that I updated the <a href="https://github.com/ebouteillon/freesound-audio-tagging-2019">README</a> in the repository. I tried to clarify some parts of the description, added instructions to reproduce my results, acknowledgments where due, and of course useless badges... 😄
Feedback is of course welcome.</p>
<p>Thank you for your positive feedback. An important point to take care of is not leaking data between folds and stages during the warm-up.
Good luck for the second stage. </p>
<p>You’re welcome. </p>
<p>Note: Updated title and happy to become master 🥳</p> |
Freesound General-Purpose Audio Tagging Challenge | 4th solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound General-Purpose Audio Tagging Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>For the solution, we employed both deep learning methods and shallow learners based on statistical features. For single models, only deep learning approaches were investigated, and different deep neural network architectures were tested with different kinds of input, ranging from the raw signal and log-scaled Mel-spectrograms (log Mel) to Mel Frequency Cepstral Coefficients (MFCC). For log Mel and MFCC, the delta and delta-delta information was also used to formulate three-channel features. Inception, ResNet, ResNeXt and Dual Path Networks (DPN) were selected as the neural network architectures, while Mixup was used for data augmentation.</p>
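<p>As an illustration of the three-channel log Mel input described above, here is a hedged sketch using librosa; the actual parameters are in the team's repository.</p>
<pre><code>import librosa
import numpy as np

def three_channel_logmel(y, sr=44100, n_mels=64):
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel)
    delta = librosa.feature.delta(logmel)             # first-order difference
    delta2 = librosa.feature.delta(logmel, order=2)   # second-order difference
    return np.stack([logmel, delta, delta2], axis=-1) # (n_mels, frames, 3)
</code></pre>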
<p>Using ResNeXt, our best single convolutional neural network architecture provides a mAP@3 of 0.967 on the public Kaggle leaderboard and 0.942 on the private one. Moreover, to improve the accuracy further, we also propose a meta-learning-based ensemble method. By exploiting the diversity between different architectures, the meta-learning-based model can provide higher prediction accuracy and robustness in comparison to a single model. Using the proposed meta-learning method, our solution achieves a mAP@3 of 0.977 on the public Kaggle leaderboard and 0.951 on the private LB.</p>
<p>PS: You can find our code given in
<a href="https://github.com/Cocoxili/DCASE2018Task2">https://github.com/Cocoxili/DCASE2018Task2</a></p>
<p>Please cite this work in your publications if it helps your research.</p>
<pre><code>@article{xu2018general,
  title={General audio tagging with ensembling convolutional neural network and statistical features},
  author={Xu, Kele and Zhu, Boqing and Kong, Qiuqiang and Mi, Haibo and Ding, Bo and Wang, Dezhi and Wang, Huaimin},
  journal={arXiv preprint arXiv:1810.12832},
  year={2018}
}</code></pre>
Freesound General-Purpose Audio Tagging Challenge | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound General-Purpose Audio Tagging Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'm describing the approach that I took for the final solution here. The model achieved 8th position on the final leaderboard, with a score (MAP@3) of 0.943289. The code is available at <a href="https://github.com/sainathadapa/kaggle-freesound-audio-tagging">https://github.com/sainathadapa/kaggle-freesound-audio-tagging</a>. (For the list of things that I tried before the final solution, <a href="https://github.com/sainathadapa/kaggle-freesound-audio-tagging/blob/master/approaches_all.md">please refer to this document</a>).</p>
<h1>Acknowledgements</h1>
<p>Thanks to <a href="https://www.kaggle.com/amlanpraharaj">Amlan Praharaj</a>, <a href="https://www.kaggle.com/opanichev">Oleg Panichev</a> and <a href="https://www.kaggle.com/agehsbarg">Aleksandrs Gehsbargs</a> for the kernels they have shared. The kernels have helped me get started on the competition. Special thanks to <a href="https://www.kaggle.com/daisukelab"></a><a href="/daisukelab">@daisukelab</a> for the sharing his observations about data preprocessing, augmentation and model architectures. His insights were crucial to my solution.</p>
<h1>Data preprocessing</h1>
<p>Leading/trailing silence in the audio may not contain much information and thus not useful for the model. Hence, the very first preprocessing step is to remove this silence. <code>librosa.effects.trim</code> function was used to achieve this.</p>
<h2>Log Mel-Spectrograms</h2>
<p>Often in speech recognition tasks, MFCC features are constructed from the raw audio data. Since the current data contains non-human sounds as well, using the log Mel-spectrogram data works better than the MFCC representation. Log Mel-spectrograms for all train and test samples were pre-computed, so that compute time could be saved during training and prediction (disk is cheaper compared to GPU).</p>
<h2>Additional features</h2>
<p>Inspired by a few Kaggle kernels, summary statistics of multiple spectral and time-based features were calculated. Since many of these features were correlated, they were transformed using Principal Component Analysis (PCA). The top 350 components were used while modeling (which amount to ~97% of the total variance).</p>
<h1>Architecture</h1>
<p>The model at its core, uses the <a href="https://arxiv.org/abs/1801.04381">MobileNetV2</a> architecture with few modifications. The input Log Mel-Spec data is sent to the MobileNetV2 after first passing the input through two 2D convolution layers. This is so that the single channel input can be converted into a 3 channel input. (Thanks to the <a href="http://forums.fast.ai/t/black-and-white-images-on-vgg16/2479/12">FastAI forums for this tip</a>) The output from the MobileNetV2 is then concatenated with the PCA features, and a series of Dense layers are used before the final softmax activation layer for output.</p>
<pre><code>from keras.layers import (Input, BatchNormalization, Conv2D,
                          GlobalAveragePooling2D, Dense, concatenate)
from keras.models import Model
from keras.applications.mobilenet_v2 import MobileNetV2

inp1 = Input(shape=(64, None, 1), name='mel')
x = BatchNormalization()(inp1)
x = Conv2D(10, kernel_size=(1, 1), padding='same', activation='relu')(x)
x = Conv2D(3, kernel_size=(1, 1), padding='same', activation='relu')(x)
mn = MobileNetV2(include_top=False)
mn.layers.pop(0)
mn_out = mn(x)
x = GlobalAveragePooling2D()(mn_out)
inp2 = Input(shape=(350,), name='pca')
y = BatchNormalization()(inp2)
x = concatenate([x, y], axis=-1)
x = Dense(1536, activation='relu')(x)
x = BatchNormalization()(x)
x = Dense(384, activation='relu')(x)
x = BatchNormalization()(x)
x = Dense(41, activation='softmax')(x)
model = Model(inputs=[inp1, inp2], outputs=x)
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
mel (InputLayer) (None, 64, None, 1) 0
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 64, None, 1) 4 mel[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 64, None, 10) 20 batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 64, None, 3) 33 conv2d_1[0][0]
__________________________________________________________________________________________________
mobilenetv2_1.00_224 (Model) multiple 2257984 conv2d_2[0][0]
__________________________________________________________________________________________________
pca (InputLayer) (None, 350) 0
__________________________________________________________________________________________________
global_average_pooling2d_1 (Glo (None, 1280) 0 mobilenetv2_1.00_224[1][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 350) 1400 pca[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 1630) 0 global_average_pooling2d_1[0][0]
batch_normalization_2[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 1536) 2505216 concatenate_1[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 1536) 6144 dense_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 384) 590208 batch_normalization_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 384) 1536 dense_2[0][0]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 41) 15785 batch_normalization_4[0][0]
==================================================================================================
Total params: 5,378,330
Trainable params: 5,339,676
Non-trainable params: 38,654
__________________________________________________________________________________________________
</code></pre>
<h1>Train data generation with augmentation</h1>
<p>Both the train and test audio files are of varied length. The model is designed to make use of this particular nature of the dataset. The use of global average pooling before the Dense layers allows the model to accept inputs of various lengths. While training, at each batch generation, a random integer between the limits is chosen; the 25th and 75th percentiles of train file lengths are used as the min and max limits respectively. Samples shorter than the chosen length are padded, while a random span of the chosen length is extracted from longer samples.</p>
<p>The common augmentation practices for Image classification such as horizontal/vertical shift, horizontal flip were used. In addition to this, Random erasing was also used. Random erasing or Cutout selects a random rectangle in the image, and replaces it with adjacent or random values. For more information about this data augmentation technique, refer to the <a href="https://arxiv.org/abs/1708.04896">original paper</a>.</p>
<p>Mixup is the final augmentation technique used while training. Mixup essentially takes pairs of data points, chosen randomly, and mixes them (both X and y) using a proportion chosen from Beta distribution.
> One intuition behind this is that by linearly interpolating between datapoints, we incentivize the network to act smoothly and kind of interpolate nicely between datapoints - without sharp transitions. (Quote from <a href="https://www.inference.vc/mixup-data-dependent-data-augmentation/">https://www.inference.vc/mixup-data-dependent-data-augmentation/</a>)</p>
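<p>For reference, a minimal sketch of mixup applied to a batch; the alpha value here is only an assumption, not necessarily the one used in this solution.</p>
<pre><code>import numpy as np

def mixup_batch(X, y, alpha=0.4):
    lam = np.random.beta(alpha, alpha)        # mixing proportion from a Beta distribution
    idx = np.random.permutation(len(X))       # random pairing of samples
    X_mixed = lam * X + (1 - lam) * X[idx]
    y_mixed = lam * y + (1 - lam) * y[idx]
    return X_mixed, y_mixed
</code></pre>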
<p>While I haven't run exhaustive trials to say for sure, anecdotally, each of the data augmentations has helped in improving the loss.</p>
<h1>Training</h1>
<p>Ten folds (stratified split, as there is class imbalance) were generated. For each fold, a model of similar architecture, but one that uses only the log Mel-spectrogram data, is trained first. The weights from this model are loaded into the whole model (the one that uses both mel and PCA features), and the training process continues. Attempting to train the model without this two-stage approach didn't result in as good a model.</p>
<h1>Predictions on test data</h1>
<p>Six different lengths were selected at equal intervals between the 25th and 75th percentiles of train file lengths. To make use of the higher amount of information present in longer samples, at each length, predictions are generated five times; each time a random span of the specified length is extracted from samples longer than that length. 10 (folds) x 6 (lengths) x 5 (tries) gives 300 sets of predictions for the test data. All of these predictions were combined using the geometric mean, and the top 3 predicted classes for each data point were selected for submission.</p>
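<p>A small sketch of that final combination step (geometric mean over the 300 prediction sets, then top-3 labels per clip); array shapes and names are assumptions.</p>
<pre><code>import numpy as np

def combine_and_top3(all_preds, class_names):
    # all_preds: (300, n_samples, 41) array of softmax outputs
    geo_mean = np.exp(np.mean(np.log(np.clip(all_preds, 1e-8, 1.0)), axis=0))
    top3_idx = np.argsort(-geo_mean, axis=1)[:, :3]
    return [" ".join(class_names[i] for i in row) for row in top3_idx]
</code></pre>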
Freesound General-Purpose Audio Tagging Challenge | Best Single Model: 0.915 (Private LB score) with nearly zero pre-processing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound General-Purpose Audio Tagging Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, there has been some pretty evident LB shake-up. Although I did anticipate this - given how the Public and Private LB split has been performed... </p>
<p>First of all, congratulations everyone! And thanks to the organizers - this was a really well organized competition. Even though I joined quite late (I think around 3 weeks back) and this was my first venture into the Audio domain, my learning curve has been incredibly high! I look forward to continue my work in this domain. </p>
<p>My best single model with nearly zero pre-processing scores 0.915 in the private LB. </p>
<p>As part of pre-processing, only silence removal and chunking was used. (No label refinement, no mixup).</p>
<p>Private LB: 0.915</p>
<p>Public LB: 0.921</p>
<p>Model: CNN</p>
<p>Number of Parameters: Less than 1.5M</p>
<p>Number of folds: 5</p>
<p>I think that apart from the architecture, its strength comes from the feature engineering, which followed a slightly different approach from what has been discussed here in the forums. </p>
<p>The same process with mixup scored 0.907 in Private LB, 0.925 in Public LB. </p>
<p>I wish I had submitted these results in the technical report and final submission :'(</p> |
G-Research Crypto Forecasting | 13th Place (Final) 1st Place (6 Weeks In) Final Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G-Research Crypto Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/tomforbes/1st-place-6-weeks-in-final-submission" target="_blank">View the final submission notebook here.</a></p>
<p>Final Update - Congratulations to all the winners! While this comp was a bit of a rollercoaster, I'm very pleased to get my first competition Gold. It's disappointing to just miss out on the top 10 after being there so long, but that's always possible with this type of challenge.</p>
<p>Original post below.</p>
<h3>Initial Comments</h3>
<p>First of all thankyou to G-Research and Cambridge Spark for hosting this competition and to the Kaggle staff for making everything run relatively smoothly. I found it so interesting and more open ended than other comps I've competed in, in my short Kaggler career.</p>
<p>It goes without saying that while I'm in 1st Place now there is no guarantee it will stay that way in the next 6 weeks, but I'm confident I'll be there or thereabouts. Hopefully you find this solution useful regardless of the final outcome.</p>
<p>Elephant in the room - there has been a lot of concern about the validity of the final LB results and requests to release the final test data after the competition ends so that competitors can validate their final scores. I think this would be a good idea and should put any concerns to bed. If it's possible & not too much effort for Kaggle to accommodate the request, please consider it. However, as far as I'm concerned, my current score seems reasonable based on all the scores I achieved in testing. I have no reason to believe any of the LB scores are incorrect until there is evidence to the contrary.</p>
<p><strong>I would encourage those who feel their score is unexpected share their solutions as the Kaggle community might be able to debug any issues/misunderstandings.</strong></p>
<p>It is obviously difficult to accurately predict crypto prices, but it's possible to find <em>some</em> signal - hopefully this notebook will shed some light on how I got mine.</p>
<p><em>Disclaimers</em></p>
<ul>
<li><em>this was not my exact final submission but very similar</em></li>
<li><em>I have removed the dataset containing my pretrained models and scalers - so some commands will error</em></li>
</ul>
<h3>Summary</h3>
<ul>
<li>17 features with lagged and timestamp averages for all models.</li>
<li>Ensembles of LGBM and Keras NN models.</li>
<li>Target Engineering and prediction switching.</li>
<li>Some training on extra cryptos outside competition scope.</li>
</ul>
<p>Test Scores (no lookahead bias):</p>
<ul>
<li>0.0565 best score on original testing period (Jun 13 2021 - Sep 21 2021)</li>
<li>0.0475 best score on supplemental_train.csv update. (Sep 21 2021 - Jan 24 2022)</li>
<li>0.0465 best single model score (Jun 13 2021 - Sep 21 2021)</li>
</ul>
<h3>External Data</h3>
<p>I downloaded free public data from multiple exchange apis (Binance, FTX, Coinbase, Kucoin… etc.) to see if this extra data would improve my models. Some of my final models were trained on extra data. Specifically, using some currencies not included in the competition (e.g. XRP, ZEC…) from Binance seemed to provide a consistent small improvement over multiple timeframes.</p>
<h3>Feature Engineering</h3>
<p>I settled on a group of 17 features for all models. There are 8 lagged features - <strong>a simple mixture of EMAs, historical returns and historical volatility over various lookback periods</strong> - and these features were averaged across timestamps to produce 8 more. Asset_ID was also included. It was tricky to find a group that performed consistently well across different time periods; adding or removing features from this set seemed to reduce performance. I'm sure this isn't the optimal feature set, but this group seemed to work well enough.</p>
<p>It was also important to perform some kind of binning on the features, especially for training the LGBM model. The commonly used reduce_mem_usage function and some rounding functions seemed to provide a suitable amount of bins. I found that binning to 500-1000 unique values worked well for any given continuous feature.</p>
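<p>Purely as an illustration of the kind of lagged features and rounding-based binning described above (the lookbacks and rounding precision are assumptions, not the actual values used):</p>
<pre><code>import pandas as pd

def add_lagged_features(df, span=60, lookback=15):
    out = pd.DataFrame(index=df.index)
    out["ema_ratio"] = df["Close"] / df["Close"].ewm(span=span).mean() - 1   # EMA-style feature
    out["ret"] = df["Close"].pct_change(lookback)                            # historical return
    out["vol"] = df["Close"].pct_change().rolling(lookback).std()            # historical volatility
    return out

def bin_features(df, decimals=4):
    # Rounding acts as a crude binning step, keeping roughly 500-1000
    # unique values per continuous feature for the LGBM model.
    return df.round(decimals)
</code></pre>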
<h3>Target Engineering</h3>
<p>I think a crucial part of this competition was manipulating the target. Thanks to the work of <strong>@alexfir</strong> and other Kagglers <a href="https://www.kaggle.com/alexfir/recreating-target" target="_blank">we found out how the target was being calculated</a>.<br>
I thought it would be useful to split the target into two components:</p>
<ul>
<li>The forward 15 minute return of an asset</li>
<li>The beta component, where we calculate the mean of forward 15 minute returns for all assets and incorporate past 15 minute returns for the previous 3750 timestamps</li>
</ul>
<p>As <strong>@gengdaiziwang</strong> illustrates <a href="https://www.kaggle.com/gengdaiziwang/are-we-training-models-on-an-inaccurate-target" target="_blank">in his helpful notebook</a>, we can see that in the case where an observation is missing for a given asset, the beta component is automatically set to 0.</p>
<p><a href="https://postimg.cc/Dm1B6rXM" target="_blank"><img src="https://i.postimg.cc/mZ5JbV2B/download.jpg" alt="download.jpg"></a></p>
<p>But this makes it a completely different target! A model trained on the target given as standard will have to deal with training on a target that randomly switches between a 15 minute forward return for one asset vs what is essentially a 15 minute forward return <em>relative to the other assets</em> - very different things.</p>
<p>The approach I took was to recreate two targets to represent these two different cases. Then I could train separate models on these recreated targets, which would learn the two cases much more effectively.<br>
<a href="https://www.kaggle.com/tomforbes/target-engineering" target="_blank">See an implementation of this target engineering</a>.</p>
<p>Now, having two models trained on different targets, I can alter my predictions to match the target produced by the api. The api provides all the information I need to figure out whether the beta component will be == 0, because if an asset is missing from an iteration I know that for the next 3750 iterations the beta component == 0 for this asset. <strong>Therefore my next 3750 predictions will come from a model that was trained on a target constructed without a beta component.</strong></p>
<p>This target engineering and dual model method added roughly 0.01 to my score on the original test period, although this did vary quite significantly for other testing periods.</p>
<p>To summarise as simply as possible:</p>
<ul>
<li>TargetZero = A target based on 15 minute Forward Return only</li>
<li>TargetBeta = TargetZero + Beta Component</li>
<li>ModelZero = Model(s) trained on TargetZero</li>
<li>ModelBeta = Model(s) trained on TargetBeta</li>
</ul>
<p><a href="https://postimg.cc/xJJFnBqf" target="_blank"><img src="https://i.postimg.cc/ZRVkFSSd/gres.png" alt="gres.png"></a></p>
<h3>Models</h3>
<p>I found LightGBM worked well and was easy to experiment with. I used the weighted correlation evaluation metric and fairly out-of-the-box hyperparameters; tuning didn't seem to add much and wasn't consistent across time periods.</p>
<p>I also used a Keras NN heavily influenced by <strong>@lucasmorin's</strong><br>
<a href="https://www.kaggle.com/lucasmorin/online-fe-nn-loss-shap" target="_blank">excellent notebook</a>. This just worked really well for me out of the box and I couldn't find many ways to improve the architecture or hyperparameters.</p>
<p>Ensembling these two models worked well and added an extra 0.005 to my score (roughly), although the improvement varies in different market regimes. I also experimented with ensembles using CatBoost and XGBoost, but the predictions from most gradient boosting models were too highly correlated to provide much ensembling benefit. I found the diversity in predictions between the NN and GB models was a good match.</p>
<p>I used a fairly simple CV method for training and testing. My folds were based on timestamp values similar to the below:</p>
<ul>
<li>Training Fold 0: 1514764860 - 1570000000 </li>
<li>Training Fold 1: 1514764860 - 1580000000 </li>
<li>Training Fold 2: 1514764860 - 1590000000 </li>
<li>Training Fold 3: 1514764860 - 1600000000 </li>
<li>Training Fold 4: 1514764860 - 1610000000</li>
<li>Validation Fold 0: 1570000000 - 1580000000</li>
<li>Validation Fold 1: 1580000000 - 1590000000 </li>
<li>Validation Fold 2: 1590000000 - 1600000000 </li>
<li>Validation Fold 3: 1600000000 - 1610000000 </li>
<li>Validation Fold 4: 1610000000 - 1620000000 </li>
<li>Test Fold A: 1615000000 - 1623542400</li>
<li>Test Fold B (Supplemental Train Update): 1623542400 - 1643000000</li>
</ul>
<p>Since the public LB was useless in this competition, it was essential to construct a robust CV framework to have a reliable benchmark for making improvements to models. It was also important to have test folds in several different market regimes to avoid overfitting to one period.</p>
<p><strong>My best single model was a NN trained on fold 0 data only (surprisingly).</strong></p>
<p>I was fairly selective on picking models to include in the final submission as some folds just trained much better than others and had more consistent outperformance on unseen data.</p>
<h3>Submission</h3>
<p>This is almost the exact notebook I submitted for my final predictions. I used lists to store historical data for feature calculations and did most calculations using numpy or lists, avoiding pandas at all costs.</p>
<p>The submission completes in roughly 7 hours.</p>
<h3>What didnt work</h3>
<ul>
<li>Layering extra data from different exchanges for the same asset, e.g. training on BTC data from Binance + Coinbase + FTX + the G-Research version. It didn't harm but didn't improve models significantly either.</li>
<li>Framing as a classification problem - poor results</li>
<li>Hyperparameter tuning LGBM</li>
<li>PCA features</li>
<li>Using all 14 assets features per row / predicting on 14 targets.</li>
<li>Meta modelling GB and NN predictions</li>
</ul>
<h3>With more time id try</h3>
<ul>
<li>Extra Feature engineering</li>
<li>Pytorch implementation of keras NN for ensembling</li>
<li>LSTM</li>
<li>Other model architectures</li>
</ul>
<p>I was only interested in OHLC data, so I didn't need to get the harder-to-source features like VWAP. The data is pretty easy to find if you google it; most exchanges have public API docs, e.g. <a href="https://www.binance.com/en/landing/data" target="_blank">Binance</a>.</p>
<p>Yeah, as I said, it's 1st place 6 weeks in - I could slip a few places by the end but thought I'd share what I'd written up</p>
<p>Maybe, but then I imagine most people trained their models on the target as originally constructed which had plenty of beta == zero, so whatever results they got in testing should probably still apply unless there is an unusual amount of missing data. It seemed to be the trend that the most recent data had the least missing data</p>
<p>Thanks Kyle! I have outlined some of the training and validation folds I used in this post. For example, my best model was a NN trained on data from 1514764860 - 1570000000 (timestamp). I chose whichever folds performed best overall on my testing folds. I didn't use any supplemental_train data for training, only testing. I didn't train any models that included data after ~1610000000</p>
<p>Thanks Manuel! Yeah, I used roughly 10000000 timestamp periods. I found using larger validation periods didn't make too much difference to training, but I preferred to keep a smaller validation set - so I could keep my test sets large.</p>
<p>Yeah, I tried using beta as a feature but I didn't get a lot out of it. I think the benefit of splitting out the models is that the target is consistent and allows for better training </p>
<p>Thanks! Yeah, only 17 features for all models. Yes, for some models I kept the noise and some seemed to train better without. It certainly deserves a gold!</p>
G-Research Crypto Forecasting | 14th Place Final Submission | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G-Research Crypto Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is <a href="https://www.kaggle.com/code/takeshimorimura/14th-place-final-submission" target="_blank">my final submission</a> (interference part only). I'd appreciate any comments or feedbacks.</p>
<p>I spent most of my time reducing the running time, so I didn't expect to place this high. Maybe I was lucky😏</p>
<p>Thanks!</p> |
G-Research Crypto Forecasting | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G-Research Crypto Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Good questions. You too guessed a topic I decided to leave out. I tried all of it. CV and feature importance guided the way.</p>
<p>That's a good question. You guessed a topic that I decided to leave out. It was more sophisticated.</p>
<p>Hi everyone,</p>
<p>This is great. I'm relieved to have maintained second place in the final update, as I was nervous that anything could happen. By my estimate, there was a 60% chance I would drop below 2nd on the 6th update, due to the noisy nature of this kind of data. I also estimate that on a 7th update, Meme Lord Capital would have a 75% chance of maintaining first place. They finished with a big lead on everyone. Good job, Meme Lord Capital!</p>
<p>I'd like to thank G-Research for sponsoring the competition, and everyone involved in organizing and hosting it.</p>
<h1>Introduction</h1>
<p>In this writeup I'll strive to provide insight into my methods, but without giving away model details that could be profitable for the host. When a good financial indicator becomes common knowledge, everyone uses it, therefore it loses profitability. As such, I won't be sharing my code or any specific insight on features. </p>
<p>I decided to enter this competition because it concerns asset price prediction using actual market data (not pre-engineered features). That's the kind of data I like: having an explicit time dimension, many interacting entities, a lot of noise, and a need for creative method development. In 2014-2017, I researched stock price analysis and prediction in my own time. It didn't end up being profitable long-term as a single DIY investor, but I did learn a lot about predicting time series and asset prices.</p>
<h1>Toolkit</h1>
<p>My toolkit consisted of CPython, Numba, Jupyter, Pandas, LightGBM, Matplotlib, and scikit-learn. Using the Numba compiler instead of CPython for feature generation resulted in high performance. Doing so, I was able to experiment with a wide variety of features, utilizing the entire dataset and iterating quickly.</p>
<h1>Cross validation</h1>
<p>Setting up good N-fold cross validation (CV) is essential. We should try to see if our ideas fail in the most real environment we can make. Every time we have a research question, it should (eventually, after a period of creative exploration) be answered in that manner. If we don't have a good CV setup, our decisions will mostly be mere guesses. Good N-fold CV is easy to set up, taking less than 100 lines of code, without using scikit-learn or any timeseries framework. </p>
<p>In this competition I used 6-fold, walk-forward, grouped cross validation. The group key was the timestamp. In a typical setup, train folds were 40 weeks long, test folds were 40 weeks long, there was a gap of 1 week between test and train folds, and the ends of training folds were incremented by 20 weeks each fold. </p>
<p>I chose to overlap my folds so that they could be long, but I could still have 6 folds. With non-overlapping folds, I would have to decide between many short folds or a few long ones. </p>
<p>The advantage of having many folds is that average CV scores will have lower variance. On the other hand, the advantage of having long folds is that the model sees more data, so you get a better picture of how it will perform with the full dataset. Instead of choosing between many short folds or a few long ones, I let them overlap so I could have many long folds. I could have used more than 6 folds, or run CV multiple times, changing seeds. I didn't do these things because CV score variance was decently low, and CV was already almost too slow.</p>
<p>(If you don't pay attention to CV variance, you could waste days optimizing your model, only to find that your early decisions were based on noise and incorrect. An easy way to see if your CV scores have too much variance is to look at a plot of CV score vs. a parameter you're tuning. A good plot will usually be smooth, with a knee and a plateau, or maybe a peak or a valley. If the plot looks too noisy to clearly see those things, then try it again, with different seeds, and see if you get different results. If you do, then your CV results are too noisy. You can try using more folds, or running CV several times with different random seeds and averaging.)</p>
<p>The one week gap between train and test data was to prevent CV results from being too rosy. With no gap, a model can cheat at the the beginning of the test period, because the end of the train period is very similar. There was no gap between train and test data in the final submission.</p>
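<p>To illustrate the setup described above, here is a sketch of generating overlapping walk-forward folds with a gap; the exact boundaries are illustrative, and rows are then assigned to folds by their timestamp (the group key).</p>
<pre><code>WEEK = 7 * 24 * 60 * 60  # seconds

def walk_forward_folds(t_start, n_folds=6, train_w=40, gap_w=1, test_w=40, step_w=20):
    folds = []
    for k in range(n_folds):
        train_lo = t_start + k * step_w * WEEK
        train_hi = train_lo + train_w * WEEK   # train fold: 40 weeks
        test_lo = train_hi + gap_w * WEEK      # 1 week gap
        test_hi = test_lo + test_w * WEEK      # test fold: 40 weeks
        folds.append(((train_lo, train_hi), (test_lo, test_hi)))
    return folds
</code></pre>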
<h1>Public leaderboard scores</h1>
<p>As I was developing my model, naturally I wanted to have an idea of how it would place against competitors. Because public leaderboard scores were unrealistically high due to probing and overfitting, I had to find a way to interpret them. So I looked at masters and grandmasters with "low" scores. I figured these competitors were good enough to submit good kernels, but had been careful to keep the public leaderboard period out of their training data. I did not expect true scores to go above 0.1, based on experience and discussions here. Plotting the sorted scores of the "low" scoring masters and grandmasters, there was a clear plateau around 0.08, as I recall. That was the score I aimed to beat in my local CV. </p>
<p>Of course, I did not use the public leaderboard as a guide in any optimizations. That's always a bad idea (see the paragraph above about CV variance). I only made a total of two submissions in the competition.</p>
<h1>Feature engineering</h1>
<p>Because my labor and computational resources are finite, it’s important to know where to direct them. As I developed my model, feature engineering was guided by feature importance. As I made new features, I focused further effort on developing feature sets that already performed well (had high importance), or for which transformations led to easy increases in importance. </p>
<p>As I said in the introduction, I won't be discussing specific feature insight or give my code. I'll only be giving my two cents on methods.</p>
<h1>Learner</h1>
<p>The learner was trivial: a LightGBM GBDT regressor with squared loss. There was no ensembling other than the gradient boosting in GBDT. The only parameters I changed from defaults were the number of estimators, number of leaves, and the learning rate. There was no regularization, augmentation, or feature neutralization. I checked with CV whether these things would help, found that they wouldn't, and decided to go with a simpler submission.</p>
<p>LightGBM parameters were not tuned exhaustively. Tuning was taking a long time, and CV results were indicating that regularization would have little effect on model performance. So I decided to not push on that wall and instead placed my focus back on feature engineering. CV is essential in making these resource allocation decisions.</p>
<h1>Submission kernel</h1>
<p>Because this was a code competition with a large dataset, performance was crucial. Our kernels were limited to running in 9 hours or less, using up to 16GB of RAM. As we all can attest, it's easy to bump into those limits.</p>
<p>The hardest limitation I dealt with was the 16GB RAM limit. I wanted to train with the entire dataset, because CV had shown me that scores just kept improving with longer training data. To use the entire dataset, I had to find and fix code that resulted in unnecessary RAM copies of arrays. The Pandas operation df.update() was a frustrating offender. The solution was to select small subsets of data, use df.update() on those subsets, then append to a list of dataframes, and then at the end, concatenate the list into one dataframe.</p>
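<p>The pattern looks roughly like the sketch below (chunk size and argument names are placeholders): update small slices one at a time, then concatenate once at the end, so only small temporary copies are ever made.</p>
<pre><code>import pandas as pd

def update_in_chunks(df, new_cols, chunk_size=500_000):
    parts = []
    for start in range(0, len(df), chunk_size):
        chunk = df.iloc[start:start + chunk_size].copy()
        chunk.update(new_cols.iloc[start:start + chunk_size])  # aligns on the index
        parts.append(chunk)
    return pd.concat(parts)
</code></pre>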
<p>On the other hand, my kernel is really fast. It takes 25 minutes to import data, generate features, and train the model. When submitting, predictions are generated at a rate of about 50 timestamps per second, not including the overhead of the submission mechanism. Generating predictions for 2 weeks of data takes around 10 minutes, on top of the 25 minutes to train the model.</p>
<p>Speed wasn't a limiting factor in my submission because I used Numba to write feature generation code and my learner was simple. My main goal was to have a codebase that allowed me to test feature ideas quickly and easily. The fast submission kernel was a side effect.</p>
<p>It was mostly feature engineering, and yes, finding the right features was key.</p>
<p>Absolutely, there are a lot of things.</p>
<ol>
<li><p>All the way up to the deadline, feature engineering was the best avenue for improvement. So I'd continue developing features. I'd want to improve my codebase a little first.</p></li>
<li><p>I'd revisit some decisions I made based on CV. With more computational resources, CV scores would have less variance, and I might be able to see, for example, that regularization actually improved the model a little.</p></li>
<li><p>I'd try more complex models, especially using autoencoders, convolutional NN's, recursive NN's, etc.</p></li>
<li><p>There's a feature engineering research idea I'd like to try, but I probably shouldn't discuss it publicly.</p></li>
<li><p>This competition got me interested again in something I was researching a while ago. Essentially, I"m thinking of supervised learning problems where the data is nonlinear, high dimensional and has a lot of noise. There are toy datasets that seem like they should be easy to fit, but quickly become challenging as noisy dimensions are added. Neural nets do the best on these toy datasets, with decision forest methods coming in a distant second, and basically every other method fails with even a few dimensions. It's interesting to try to make new models that do better than neural nets in that playground.</p></li>
<li><p>I particularly enjoy a certain area of research that has applications to asset price prediction. That's unsupervised data decompositions in general (PCA being an example), but especially if there's any connection to statistical mechanics, quantum mechanics, information theory, or emergence.</p></li>
</ol>
<p>Giving a recommendation without knowing your background would be difficult. You could search "best timeseries book" and find one that's highly recommended, specific to your programming language and application domain. I come from a physics background, and timeseries are a natural extension of that, so I learned by experimenting and reading here and there. </p>
<p>Timeseries learning is essentially supervised learning that respects causality. You avoid doing anything in your algorithm that would depend on data from the future. That's both harder and easier than it sounds. Technically, it's easy because you're avoiding doing something. In practice, you can sometimes take shortcuts and make mistakes that end up causing your algorithm to need future data. Learning how to avoid these pitfalls is one of the main parts of learning timeseries forecasting. You want to be as creative as possible, clustering assets by market, etc., without making these mistakes and breaking causality. You have to develop a way of thinking about it. The only way I know to do that is with experience, but there might be a resource that teaches that way of thinking.</p>
<p>There are some things to learn that are very useful for timeseries, like de-trending data by differencing, lagged features, etc. A good timeseries book should show you these things, but a lot of it is almost self-suggesting, based on understanding the data you're modelling. For example, you could learn on your own about de-trending data by diagnosing a timeseries algorithm that didn't work (and then you'd also learn about diagnosing timeseries models). </p>
<p>Because understanding the data is important, domain knowledge is important.</p>
<p>The rest of it (actually a huge portion of it) is just machine learning. Every technique, method, or algorithm that can be applied to a supervised or unsupervised learning problem can be used for timeseries prediction.</p> |
G-Research Crypto Forecasting | 37th place approach in the G-Research Crypto Forecasting Competition. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G-Research Crypto Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks! Yes that Human Learning never ends :)</p>
<p>Congrats to the winners, and to the host for a great and interesting competition!<br>
With some luck I made it to the 37th place, below is a short summary 🙂</p>
<p>In the G-Research Crypto Forecasting Competition I have done more human learning than machine learning: 99% feature engineering and data creation. I now know more about translating between Pandas, Polars and Rapids, and tons of technical analysis. There is always human learning in machine learning. 👍🙂</p>
<p>A common approach in general trading is using different TA indicators and strategies, so my approach was to create and tune different TA indicators in an MTF (MultiTimeFrame) strategy. However, the 0.25 s per 14 assets with the hardware in use, and the time limit for the total 3-month re-run, was the biggest challenge - one we all struggled with, I guess. My ideas were to make one more bullish model and one that handles both markets. I used a maximum 1440-minute period TF to minimize the risk of timeout.</p>
<p><strong>TA indicators and feature engineering</strong></p>
<p>Short summary as it was the biggest part:</p>
<p>I created and tuned TA indicators and features for every one of the 14 assets, because different TA techniques may work better on one asset and worse on another, and to have a more diversified total portfolio in terms of input for the forecasts. Some feature creations are also more time-expensive than others, so mixing heavier and lighter feature engineering across assets kept the total test time within the limit. I have also heard somewhere that it’s easy to overfit with too many indicators, so I kept that in mind.</p>
<p>For the crypto assets Litecoin and IOTA not much of the classic indicators worked, so there I used as much TA as I could fit to get at least a score on the right side, and I kept many of the original baseline features to get a balanced overall approach and not get stuck on one idea.
To handle the outliers and extreme values I used a calculated quantile cutoff for every asset. Maybe not the best choice, and it may worsen the best scores, but I hoped it would balance and smooth things more.</p>
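<p>A minimal sketch of such a per-asset quantile cutoff (the 0.5% / 99.5% levels and column names are assumed for illustration, not the calculated values used in the solution):</p>
<pre><code>def fit_quantile_cutoffs(train_df, feature_cols, lo=0.005, hi=0.995):
    cutoffs = {}
    for asset_id, g in train_df.groupby("Asset_ID"):
        cutoffs[asset_id] = (g[feature_cols].quantile(lo), g[feature_cols].quantile(hi))
    return cutoffs

def apply_quantile_cutoffs(df, asset_id, feature_cols, cutoffs):
    lo_q, hi_q = cutoffs[asset_id]
    for col in feature_cols:
        df[col] = df[col].clip(lo_q[col], hi_q[col])   # clip outliers / extreme values
    return df
</code></pre>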
<p><strong>Models</strong></p>
<p>Here I didn’t spend much effort this time, afraid of overfitting even more, so I used an ensemble of one lightly tuned XGB and one untuned XGB with different seeds. I also added two LGBM models with different seeds and minor differences. In total, 4 models were ensembled for the prediction.</p>
<p><strong>Training</strong></p>
<p>As many of us we had many solutions and ideas, but many went to trashcan as they didn’t meet the time limit, so I used the same solution for both submissions but trained with different datasets.</p>
<p>The first submission was trained only on the public-period dataset and tuned for a more bullish market, with the belief that TA works better in that regime; the second submission was trained on the complete dataset, with no overlap up to the evaluation period.</p>
<p>As validation during training, I manually picked different market conditions to validate on: bullish, bearish and in between.</p>
<p><strong>Handling test time limit</strong></p>
<p>I tried Pandas, RAPIDS and Polars and ended up with Polars, which also works with the TA-Lib library I used for many of the indicators.</p>
<p><strong>That’s it!</strong> 😊 </p>
<p><em>Disclaimer</em><br>
<em>Disclaimers are always in place when dealing with this subject, in case someone takes it out of context: this approach is only for this competition and shouldn't be taken as a real, working end-to-end solution. Remember that all ideas and information presented are not recommendations or financial advice. Use at your own risk.</em></p>
<p>No postprocessing of the predictions, all work is done before the input 😊</p> |
G-Research Crypto Forecasting | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G-Research Crypto Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for your question.</p>
<p>I used such confusing code to avoid using pandas to accelerate inference.<br>
The original and simpler code was as follows;<br>
<code>df[f'log_close/mean_{lag}_id{id}'] = np.log( df[f'Close_{id}'] / df[f'Close_{id}'].rolling(lag).mean() )</code></p>
<p>First of all, I would like to thank G-Research, kaggle and all competitors. I have learned a lot through this interesting competition. It was very difficult for me (actually, it was the third machine learning challenge in my life) and without the knowledge shared by all the kagglers, I would not have been able to handle it.</p>
<p>Also, since this is a competition with a strong element of luck, it was only by sheer luck that I was able to finish in third place, but I am glad that my rank was somewhat stable in the last few updates.</p>
<p>In coding, I referred to the knowledge of kagglers. I am especially grateful to <a href="https://www.kaggle.com/jagofc" target="_blank">@jagofc</a> for providing <a href="https://www.kaggle.com/code/jagofc/local-api/" target="_blank">local api</a>. Without <a href="https://www.kaggle.com/jagofc" target="_blank">@jagofc</a>'s local api, I would not have been able to successfully submit even once.</p>
<p>Here is my solution. If you have any suggestions for improvement or anything else you think I should know, I'd be glad to hear from you!</p>
<p>The link to the notebook is as follows;</p>
<ul>
<li>training: <a href="https://www.kaggle.com/code/sugghi/training-3rd-place-solution" target="_blank">https://www.kaggle.com/code/sugghi/training-3rd-place-solution</a></li>
<li>inference: <a href="https://www.kaggle.com/code/sugghi/inference-3rd-place-solution" target="_blank">https://www.kaggle.com/code/sugghi/inference-3rd-place-solution</a></li>
</ul>
<h3>Overview of my model</h3>
<p>The characteristics of my model are as follows</p>
<ul>
<li>Only 'Close' is used.</li>
<li>The model is trained for each coin using a common set of features for all the coins.</li>
<li>The difference between the change of each currency and the change of all currencies is provided as features.</li>
<li>Single model of LightGBM (7-fold CV)</li>
</ul>
<p>Considering the definition of the forecasting target in this competition, I felt it was necessary to prepare features with information about the entire market. I also thought that some currencies might be affected by the movements of other currencies, so I made it possible to refer to information about other currencies as well. Since I thought that memory and inference time would become more demanding with this kind of processing, I reduced the amount of data to be used. Specifically, I considered 'Close', which is used to calculate the target, to be the most important, so I decided to use only it. Even so, the ensemble could not be performed because of the limited inference time (and lack of coding skill).</p>
<p>For CV, I used <a href="https://www.kaggle.com/code/nrcjea001/lgbm-embargocv-weightedpearson-lagtarget/" target="_blank">EmbargoCV</a> by <a href="https://www.kaggle.com/nrcjea001" target="_blank">@nrcjea001</a>.</p>
<h3>About the data used for training</h3>
<p>The start date of train data differed greatly among currencies. Since my model deals with the average of the changes of each currency, I considered it undesirable for the existing currencies to differ significantly between the training period and the evaluation period. As I expected that all currencies would have few missing values during the evaluation period of the competition, I decided not to use all of the train data, but to use the data from the period when there were enough currencies present.</p>
<p>The selection of the starting date was done by looking at the CV scores. However, this was a mistake in hindsight, since it meant that I was comparing CV scores across different data.</p>
<p>Also, each currency had several long and short blank periods. I attempted forward fill to prevent missing data as a result of rolling. On the other hand, I thought that forward fill for the entire period might cause a decline in data quality when there is a long blank period, so I set a limit on forward fill. In the evaluation phase, the code was designed to have forward fill without a limit, but I thought this would not be a problem since there are no long blank periods in the evaluation phase.</p>
<h3>Feature engineering</h3>
<p>Since the value of cryptocurrencies is increasing, I tried to make sure to pick up the magnitude of the change independent of the evaluation period.</p>
<p>For 'Close', I prepared two features for multiple lag periods: the log of the ratio of the current value to the average during the period, and the log of the ratio of the current value to the value a certain period ago. For these, I took the average for all currencies (Due to missing data, no weighted averaging was performed). In addition, the difference between each currency and the average of all currencies was also prepared as a feature. As a result, this feature seems to have worked well.</p>
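<p>A rough pandas sketch of these 'Close' features (for illustration only; my actual inference code avoids pandas for speed, and the lag values here are placeholders):</p>
<pre><code>import numpy as np
import pandas as pd

# close: DataFrame indexed by timestamp with one column per Asset_ID, forward-filled with a cap,
# e.g. close = close.ffill(limit=60). Lags below are placeholders, not the ones actually used.
def make_features(close: pd.DataFrame, lags=(60, 300, 900)) -> pd.DataFrame:
    feats = {}
    for lag in lags:
        base = {
            f"log_close/mean_{lag}": np.log(close / close.rolling(lag).mean()),
            f"log_return_{lag}": np.log(close / close.shift(lag)),
        }
        for name, f in base.items():
            market = f.mean(axis=1)                    # simple average over all currencies
            feats[f"{name}_market"] = market
            for asset in close.columns:
                feats[f"{name}_id{asset}"] = f[asset]
                feats[f"{name}_id{asset}-market"] = f[asset] - market
    return pd.DataFrame(feats, index=close.index)
</code></pre>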
<h3>Dealing with time limit</h3>
<p>The most difficult part for me was the time limit. When inference is performed, it should be sufficient to generate only one row of features, but my programming skills did not allow me to do this well, so I gave up on this and generated features for all data. So, I tried to avoid using pandas as much as possible to speed up the process, and managed to finish the inference within 9 hours. To be honest, I was quite worried about the timeout at the final update.</p>
<h3>What I would have worked on if I had more time</h3>
<ul>
<li>Speed up feature generation to save time and perform ensemble.</li>
<li>Learning with less missing data (external data?)</li>
<li>Parameter optimization<br>
The first successful submission was a week before the end of the competition, so I have not been able to optimize much.</li>
</ul>
<p>I would like to continue to improve so that I can achieve results in competitions where the element of luck is small. See you at the next competition!</p>
<p>Thank you for celebrating!</p>
<p>Thanks for your comment!</p>
<p>Thanks for your message! I am glad you found it useful!</p>
<p>I greatly appreciate your local_api!<br>
Thank you once again!</p>
<p>Thanks for your comment!<br>
I hope my code is helpful :)</p>
<p>I have greatly benefited from your code. Thank you!</p>
<p>Thank you for celebrating with me!<br>
I look forward to competing in another competition!</p>
<p>Thanks for your comment!<br>
I am glad you found it useful :)</p> |
G-Research Crypto Forecasting | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G-Research Crypto Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thank you G-Research and Kaggle for hosting the competition. I really like the data provided in this competition, actual market data give us more room for creative method development and is much more interesting compared with anonymized tabular data.</p>
<p>My approach is 99% on modeling and the only feature I added is the time of day. Here is a very brief description of my model, the final submission is an ensemble of 4 models trained with different sequence lengths.<br>
<img src="https://i.imgur.com/ECta6Oe.png" alt=""></p>
<p>No, I didn't delete any content. If you can't see the picture, it could be because Imgur is blocked by your internet provider.</p>
<p>Congrats on 2nd place!</p>
<p>Thanks for your kind words, my friend!</p>
<p>Thanks for your question! I always start by checking the data shape (i.e. is the data tabular? 1d? or 2d?), then pick some suitable neural network layers (e.g. LSTM/ Conv/ MHA) for it. I would say Transformer is a very strong candidate for data >= 1D, definitely give it a try.<br>
I will perform feature engineering only when the model cannot 'see' the information from the input. For example, Transformer can't handle input with a very long sequence length, so we will have to create features to capture long-term information.</p>
<p>I use 3-fold Grouped CV and the key is timestamps.</p>
<p>Thanks for the question! I use standardization for input scaling. I find that the feature scales can be very different between assets (e.g. ETH vs BTC) and between timestamps (2020 vs 2021), so I calculate the mean and std for each asset using only the data from the input sequence (90 data points), which makes the model more stable and robust.</p>
<p>Thanks for the question! The scale of the features is very different between assets and between timestamps, so I decided to normalize the input using only the data from the input sequence for the same asset (90 data points per feature) to compute the mean and std.</p>
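<p>A minimal sketch of this per-sequence standardization (shapes are illustrative):</p>
<pre><code>import numpy as np

# x: one asset's input window of shape (seq_len=90, n_features); statistics come from these 90 points only.
def normalize_window(x: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    mean = x.mean(axis=0, keepdims=True)
    std = x.std(axis=0, keepdims=True)
    return (x - mean) / (std + eps)
</code></pre>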
G-Research Crypto Forecasting | 9th Place Solution - Inference | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G-Research Crypto Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Actually it was something like domain knowledge. Now, I'm not a trading expert or anything, but I know that out there people use this sequence very heavily, so when I decided the window sizes, I thought it would be better to use them instead of other random values.</p>
<p>I'm glad if it helps</p>
<p>Hello everyone,</p>
<p>Firstly, I want to thank you everyone in this competition, it was a great journey for me and I'm very happy to win a gold medal for the first time.</p>
<p>I actually shared my inference <a href="https://www.kaggle.com/code/bturan19/lgb-3fold-rollingagg-lagtarget-submissioninference/notebook" target="_blank">notebook</a> long ago. Now I want to give some information about it..</p>
<p>My most precious feature was Hull moving average:</p>
<pre><code>def moving_average(a, n=3) :
ret = np.cumsum(a, dtype=float)
ret[n:] = ret[n:] - ret[:-n]
return np.append(np.array([1]*n), ret[n - 1:] / n)[1:]
#@jit(nopython=True)
def calcHullMA_inference(series, N=50):
SMA1 = moving_average(series, N)
SMA2 = moving_average(series, int(N/2))
res = (2 * SMA2 - SMA1)
return np.mean(res[-int(np.sqrt(N)):])
row["hull"] = last_close - calcHullMA_inference(f[asset]["all_close"][-260:], 240)
</code></pre>
<p>Another important note about creating lag features is the window size. I chose my windows with a Fibonacci sequence: fibo_list = [55, 210, 340, 890, 3750]. You can see that in the notebook.</p>
<p>The last important explanation about my features is the use of a lag target. At first I tried to use the official calculation, but then I just gave up and used this:</p>
<p><code>row["target_return"] = (last_close / f[asset]["all_close"][-16]) -1</code></p>
<p>and for a market indicator, I collected the new "target_return" values for every batch and took the average of the last available list.</p>
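<p>A rough sketch of that market indicator (toy data stands in for the per-asset close buffers <code>f[asset]["all_close"]</code> used above):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
f = {a: {"all_close": np.cumprod(1 + rng.normal(0, 1e-3, 300))} for a in range(14)}  # toy buffers

asset_returns = []
for asset in f:                                            # inside the per-batch loop
    closes = f[asset]["all_close"]
    asset_returns.append(closes[-1] / closes[-16] - 1)     # ~15-minute lag return per asset
market_return = float(np.mean(asset_returns))              # averaged across assets, reused as a feature
print(market_return)
</code></pre>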
<p>The last important note about my work is the models. I have 3 different LightGBM models that were trained for different market conditions: an up market, a down market and a relatively more stable market. Then I take the average of them.</p>
<p>I thought about how I could improve my models. First, I could do parameter optimization; I used almost default parameters. Then I could add different kinds of models; I think my submission time was around 6 hours. Lastly, I actually had more features in mind.</p>
<p>Well, thanks to everyone again.. See you in next competition.</p> |
G-Research Crypto Forecasting | calm down even if you got negative score. Let's shares your thoughts on this task | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G-Research Crypto Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The market is almost always unpredictable; even a very good model cannot guarantee stable performance, since the patterns shown in the history may change in the future. I tested my models using 2 x 3 months of data and got roughly 0.05 on each (no leakage), but I got -0.0025 on the first update (the submitted model used all data). I feel as sad as many of you, but I think it is normal, and I really enjoyed the competition.</p>
<p>I think the useful thing for us would be to share our thoughts on this task and learn from each other. Let me share my ideas first.</p>
<ol>
<li><p>We are asked to predict r15 - beta*m15, where beta is the slope of a linear fit of past returns, so if you think a linear relation exists, the expected target should be the fitted intercept (very close to zero). So we are actually asked to predict the unexpected extra return of each asset beyond the past linear fit (see the sketch after this list).<br>
Beta is therefore a feature that directly affects the target. Beta is set to zero if there is a missing value in the 3750-minute window, in which case we are actually asked to predict r15. So I have two types of models: one for zero beta (predict r15) and one for non-zero beta (predict r15 - beta*m15).</p></li>
<li><p>Other features I used are [asset return, portfolio return, asset return - portfolio return, asset return moving averages] over the past [1,2,3,5,10,15,20,30,40,50,60] minutes.</p></li>
<li><p>I used an ensemble of 12*5 LGBM models for this competition; each time I added Gaussian noise to all features and the target.</p></li>
</ol>
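<p>A small sketch of that target (column names are assumed, and this is just one common way to write the rolling beta, not necessarily the exact competition formula):</p>
<pre><code>import pandas as pd

# r: an asset's 15-minute forward return, m: the market return, aligned by timestamp.
def residual_target(r: pd.Series, m: pd.Series, window: int = 3750) -> pd.Series:
    beta = (r * m).rolling(window).mean() / (m * m).rolling(window).mean()
    beta = beta.fillna(0.0)          # missing history in the 3750-minute window -> beta treated as 0
    return r - beta * m              # with beta == 0 the target reduces to plain r15
</code></pre>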
<p><a href="https://www.kaggle.com/julianmukaj" target="_blank">@julianmukaj</a> I did use lots of regularization, including [L1, L2, max_depth, min_data_leaf, feature_fraction, bagging_fraction, num_leaves]. The 0.05 is LB; on the updated SUP data, CV is above 0.06 and training is 0.1-0.13.</p>
<p>For the added noise, I cannot guarantee it is good for this task, but I know it works in some tasks with noisy input, just like the example I mentioned, and I think the mathematical formula makes sense.</p>
<p><a href="https://www.kaggle.com/petersk20" target="_blank">@petersk20</a> you are right; that's why I have 12*5 models in total, each with input data perturbed by different Gaussian noise (both adding N(0, sigma) and multiplying by N(1, 0.03)). The score keeps increasing for the first 40 models and seems to saturate after that.<br>
One thing I did not pay attention to is that the noise I add is independent; however, there should be some correlation between the features, so ideally you should adjust your input noise based on that. I was busy with some internship interviews and did not have time to do the fine-tuning.</p>
<p>Ahh, I think you are right that at the very least I need to add a model without extra noise. All my models were trained on data with extra noise. But the noise intensity I add is actually small, and all the models perform similarly to a model without noise.</p>
<p><a href="https://www.kaggle.com/julianmukaj" target="_blank">@julianmukaj</a> I agree that I definitely have some level of overfitting, but I don't think it would be that huge. In the first test, I used the LB data as test and the data before it as training; the CV was about 0.066 and the test 0.05. In the second test, I used the SUP data as test and the data before it as training; the CV was still about 0.06 and the test around 0.05.</p>
<p>The Gaussian noise is used to make smoother predictions and deal with overfitting, since tree-based models give piecewise-constant (discrete) predictions. It worked well when I tested it on the 2 x 3 months test mentioned above.</p>
<p>If you still don't agree with the noise I added, think about this situation. Imagine the function y = sin(x) + N, where N is some large noise term. We know the truth is y = sin(x), but an overfitting LGBM will give wiggles everywhere. Basically, the predicted value in some bin [x1, x2] is mean(yi | xi in [x1, x2]) = mean(sin(xi) | xi in [x1, x2]) + mean(N(xi) | xi in [x1, x2]). If the bin is large, mean(N(xi) | xi in [x1, x2]) will be close to zero but the model will underfit; if the bin is small, mean(N(xi) | xi in [x1, x2]) will have large variation and cause wiggles, i.e. overfitting. Adding noise to {xi} moves the data in one bin into nearby bins, so each model gets a different mean(N(xi) | xi in [x1, x2]), and the expected ensemble contribution of the noise decays as the number of models increases.</p>
<p>I think there would be better ways to add noise to deal with overfitting, but I think the idea itself is right, especially for noisy data. What do you think about this? A toy illustration is sketched below.</p>
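<p>A toy demonstration of the smoothing effect (decision trees stand in for LGBM to keep it short; numbers are illustrative):</p>
<pre><code>import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
x = rng.uniform(0, 6, 5000).reshape(-1, 1)
y = np.sin(x[:, 0]) + rng.normal(0, 1.0, 5000)            # y = sin(x) + large noise
x_test = np.linspace(0, 6, 200).reshape(-1, 1)
truth = np.sin(x_test[:, 0])

single = DecisionTreeRegressor(max_depth=8).fit(x, y).predict(x_test)
jittered = [DecisionTreeRegressor(max_depth=8)
            .fit(x + rng.normal(0, 0.05, x.shape), y)      # add input noise per model
            .predict(x_test) for _ in range(30)]
ensemble = np.mean(jittered, axis=0)                       # averaging usually smooths the wiggles

print(np.abs(single - truth).mean(), np.abs(ensemble - truth).mean())
</code></pre>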
<p>For the features part, what do you think would be better features? I am just curious about this.</p> |
G2Net Detecting Continuous Gravitational Waves | 10th place solution: CNN with pseudo labels | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Detecting Continuous Gravitational Waves
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Even for models that only predict targets, normalizing with RobustScaler improved public lb.</p>
<p>After submitting the 5-fold average of a single model with public LB = 0.747, I proceeded with the experiments by adding pseudo-labels,<br>
so I do not know the exact effect of the pseudo-labels alone.<br>
At least, the pseudo-labels were essential to get from public LB 0.747 to 0.780.</p>
<p>Thank you to the hosts and the organizers for having such a challenging competition!</p>
<p>My solution is based on a CNN. I incorporated various elements, but the following three items were particularly important.</p>
<ul>
<li><strong>normalization using RobustScaler</strong></li>
<li><strong>multitask learning of target and frequency</strong></li>
<li><strong>pseudo labels</strong></li>
</ul>
<h1>Data</h1>
<p>I used PyFstat to generate data with the same range of timestamps, frequency, and amplitudes as the test data. The signal data was generated with a signal depth (sqrtSX/h0) in the range of [1, 50]. The number of data for each is as follows.</p>
<table>
<thead>
<tr>
<th>data type</th>
<th>number of data</th>
</tr>
</thead>
<tbody>
<tr>
<td>gap noise</td>
<td>3000</td>
</tr>
<tr>
<td>nonstationary noise</td>
<td>2983</td>
</tr>
<tr>
<td>signal</td>
<td>5400</td>
</tr>
</tbody>
</table>
<h1>Preprocess</h1>
<p>I created an image with shape (C,H,W) = (2, 360, 127) by performing the following processing on each of H1 and L1.</p>
<ul>
<li>Normalize the square of the amplitudes of H1 and L1 using <code>sklearn.preprocessing.RobustScaler</code>.<ul>
<li>Since the real noise in the test data has large outliers, I use <code>sklearn.preprocessing.RobustScaler</code>, which is resistant to outliers.</li></ul></li>
<li>Align H1 and L1 timestamps in the same way as <a href="https://www.kaggle.com/code/laeyoung/g2net-large-kernel-inference" target="_blank">G2NET large kernel inference</a>, resulting in an image with shape (C,H,W) = (2, 360, 5760).</li>
<li>Take a moving average to make the image size (2, 360, 5760) -> (2, 360, 127). </li>
<li>Clip the values of the channel with the larger maximum value.<ul>
<li>Because the test set contains data with too large a difference in maximum values between H1 and L1.</li></ul></li>
<li>Finally, I standardize the image with entire data mean and standard deviation.</li>
</ul>
<h1>Model</h1>
<p>MultiOutput model predicting target and frequency. </p>
<ul>
<li>Target loss is <code>nn.BCEWithLogitsLoss</code></li>
<li>Frequency//50 loss is <code>nn.CrossEntropyLoss</code> of 11 classes</li>
<li>Architecture is tf_efficientnet_b5_ap</li>
<li>Change the stride of the first conv layer of the model to (1,2) to scale up the image resolution</li>
</ul>
<pre><code>class CustomModel(nn.Module):
def __init__(self, pretrained=True):
super().__init__()
self.net = timm.create_model("tf_efficientnet_b5_ap",
pretrained=pretrained,
num_classes=0,
in_chans=2)
# decrease first conv's stride
modules_iter = iter(self.net.modules())
for module in modules_iter:
if isinstance(module, torch.nn.Conv2d) and tuple(module.stride) == (2, 2):
break
module.stride = (1, 2)
# Target
self.head1 = nn.Sequential(
nn.Linear(self.net.num_features, 1)
)
# Frequency//50: 40-500Hz // 50
freq_div_n = 50
self.head2 = nn.Sequential(
nn.Linear(self.net.num_features, (500 // freq_div_n) - (40 // freq_div_n) + 1)
)
def forward(self, x, labels=None):
feat = self.net(x)
y1 = self.head1(feat)
y2 = self.head2(feat)
return y1[:,0], y2
</code></pre>
<h1>Training</h1>
<ul>
<li>cv: target StratifiedKFold(n_splits=5)</li>
<li>optimizer: AdamW</li>
<li>scheduler: warmup 0-3epoch(lr=4e-6->4e-4) + Cosine Annealing 3-100epoch(lr=4e-4->4e-6)</li>
</ul>
<h1>Augmentation</h1>
<ul>
<li>Mixup<ul>
<li>mixup where data with target=1 is not mixed together</li>
<li>mixup with the overall mean value of the test set</li></ul></li>
<li>Horizontal and vertical lines of outliers that mimic the real data of the test set</li>
<li><a href="https://www.kaggle.com/competitions/g2net-gravitational-wave-detection/discussion/275335" target="_blank">Augmentation of 8th place in the last competition</a>(LIGO Swap, Swap with Other Negative)</li>
<li>GaussNoise</li>
<li>Vertical Shift</li>
<li>Torchaudio Masking(time masking, frequency masking)</li>
<li>Horizontal Flip, Vertical Flip</li>
</ul>
<h1>Pseudo Labels</h1>
<p>Pseudo labels using only the simulation data from the test set and pseudo labels excluding real data with amplitudes that take on outliers were effective. <br>
It was better to train with soft labels than with hard labels.</p>
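<p>A minimal sketch of training with soft pseudo-labels (the model and tensors below are placeholders, not the actual pipeline):</p>
<pre><code>import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()
model = nn.Sequential(nn.Flatten(), nn.Linear(2 * 360 * 127, 1))   # stand-in for the CNN
images = torch.randn(8, 2, 360, 127)          # pseudo-labelled test images
soft_targets = torch.rand(8)                  # ensemble probabilities kept as-is, not rounded to 0/1
loss = criterion(model(images).squeeze(-1), soft_targets)
loss.backward()
</code></pre>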
<h1>TTA</h1>
<p>Horizontal Flip, Vertical Flip</p>
<h1>Ensemble</h1>
<p><a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/205564" target="_blank">Rank averaging</a> of submit.csv files from different models.<br>
However, according to <a href="https://www.kaggle.com/poteman" target="_blank">@poteman</a>, Stacking seems to perform slightly better in private lb. (I heard about this after competition. poteman's submit is not used for final submission.)</p>
<h1>P.S.</h1>
<p><strong>I did all the above solutions by myself. I did not get help from my teammates.</strong><br>
Code: <a href="https://github.com/riron1206/kaggle-G2Net-Detecting-Continuous-Gravitational-Waves-10th-Place-Solution" target="_blank">https://github.com/riron1206/kaggle-G2Net-Detecting-Continuous-Gravitational-Waves-10th-Place-Solution</a></p> |
G2Net Detecting Continuous Gravitational Waves | 11th place solution OR Simpler is Better | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Detecting Continuous Gravitational Waves
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><em>First, I would like to thank the Armed Forces of Ukraine, the Security Service of Ukraine, the Defence Intelligence of Ukraine, State Emergency Service of Ukraine for providing safety and security to participate in this great competition, completing this work, and helping science, technology, and business not stop and move forward.</em></p>
<p>Also thanks Kaggle team and competition organizers for this year G2Net Detecting Continuous Gravitational Waves event.</p>
<p>This year as in the previous one, we were working with simulated GW signals, but this year we had only STFTs of these signals, which makes this year's task a bit different</p>
<h1>Data</h1>
<h2>Data generation</h2>
<p>After the first experiments with only the training data, I realized that it is nearly impossible to build a good deep-learning model on it and validate solutions with the ROC-AUC metric. So I decided to start generating data using the <code>pyfstat</code> library.<br>
I started from the train set negative samples: I used <code>*_timestamps_*</code> and <code>frequency_Hz</code> from the original train samples; for the other parameters, I mostly took them from one of the data generation kernels:</p>
<pre><code>generation_kwargs = {
"sqrtSX": 0.5e-23,
"detectors": "H1,L1",
"Tsft": 1800,
"Band": 1 / 5.01,
"SFTWindowType": "tukey",
"SFTWindowBeta": 0.001,
}
</code></pre>
<p>I generated 100 duplicates for each training sample, applied adversarial validation with my best (CNN-based) model and received 0.5 ROC-AUC, which indicated that I was on the right track.<br>
Then I repeated the same with the positive train samples, using the following additional parameters:</p>
<pre><code>random_param_dict = {
"F1": lambda: 10 ** stats.uniform(-12, 4).rvs(),
"cosi": lambda: stats.uniform(-1.0, 1.0).rvs(),
"psi": lambda: stats.uniform(-0.25 * np.pi, 0.25 * np.pi).rvs(),
"phi": lambda: stats.uniform(0, 2 * np.pi).rvs(),
"Alpha": lambda: stats.uniform(0, 3.14159).rvs(),
"Delta": lambda: stats.uniform(0, 3.14159).rvs(),
"h0": lambda h0_center_: h0_center_ / stats.uniform(5, 90).rvs(),
}
</code></pre>
<p>One more important step was to shift the signal during generation, so it is not vertically centered. I repeated adversarial validation and reached ~0.5 ROC-AUC again.<br>
The next step was to repeat the same operation for the test set, but it was trickier. The test set contains:</p>
<ul>
<li>Nonstationary noise</li>
<li>Instrumental artifacts<br>
I decided to treat only the <code>Nonstationary noise</code> problem. I found nearly all nonstationary samples with a simple std computation and several iterations of adversarial validation (train vs test sets).<br>
For stationary samples, I repeated the train set generation procedure (10 positive and 10 negative duplicates).<br>
For nonstationary samples I simply used a Gaussian distribution to generate each test sample duplicate.</li>
</ul>
<pre><code>def gen_multivar(ref_sample):
locs = ref_sample.mean(axis=0)
stds = ref_sample.std(axis=0)
n_freqs = ref_sample.shape[0]
return np.stack([np.random.normal(loc=loc, scale=scale, size=n_freqs) for loc, scale in zip(locs, stds)], axis=-1)
def gen_multivar_complex(ref_sample):
return gen_multivar(ref_sample.real) + gen_multivar(ref_sample.imag) * 1j
</code></pre>
<p>Then I repeated adversarial validation with all generated data (~240K samples) and received ~0.5 Roc-Auc. So I was pretty much ready for modeling!</p>
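<p>For reference, adversarial validation here boils down to checking whether a classifier can separate generated samples from originals; below is a lightweight sketch with LightGBM standing in for the CNN used above, and random features standing in for per-sample statistics:</p>
<pre><code>import numpy as np
import lightgbm as lgb
from sklearn.model_selection import cross_val_score

# Label generated samples 0 and original samples 1; AUC ~ 0.5 means they are indistinguishable.
rng = np.random.default_rng(0)
X_generated = rng.normal(size=(2000, 32))
X_original = rng.normal(size=(2000, 32))
X = np.vstack([X_generated, X_original])
y = np.concatenate([np.zeros(len(X_generated)), np.ones(len(X_original))])
auc = cross_val_score(lgb.LGBMClassifier(n_estimators=200), X, y, cv=5, scoring="roc_auc").mean()
print(f"adversarial AUC: {auc:.3f}")   # ~0.5 here by construction
</code></pre>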
<h2>Data preprocessing</h2>
<p>I have tried 2 ways for data preprocessing:</p>
<ol>
<li>Baseline with binning - described <a href="https://www.kaggle.com/code/junkoda/basic-spectrogram-image-classification" target="_blank">here</a>. 128 bins worked best for me</li>
<li>Laeyoung Data preprocessing with noise injection - described <a href="https://www.kaggle.com/code/laeyoung/g2net-large-kernel-inference" target="_blank">here</a>. Outperformed first option but only with huge amount of noise TTA on inference (64 noise TTA x 4 flips TTA)</li>
</ol>
<h1>Modelling</h1>
<h1>Classification</h1>
<p>I mostly used a simple CNN for spectrogram classification. As for backbones, I tried several, but the best was <code>convnext_base_384_in22ft1k</code>.<br>
I trained the model in a vanilla image classification setup (Adam + ReduceLROnPlateau), but the augmentations were pretty important:</p>
<ul>
<li>HorizontalFlip</li>
<li>VerticalFlip</li>
<li>ChannelShuffle</li>
<li>ShiftScaleRotate (only by Y axis)</li>
<li>OR Mixup <br>
After that, I tried to fine-tune the model without most of the augmentations (reducing the LR 10 times).<br>
At inference I just averaged 5 folds and used 4 flips TTA x 64 noise TTA; inference in such a setup took several hours :(<br>
Such a model with <code>Laeyoung Data preprocessing</code> gave me 0.764 Public and 0.785 Private scores (not selected)</li>
</ul>
<h2>Ensembling</h2>
<p>For the final <code>classification</code> ensemble I blended nearly 10 models with different preprocessing types, architectures and augmentation setups.</p>
<h2>Segmentation</h2>
<p>I have also trained a bunch of segmentation models using generated data. I have tried 2 setups:</p>
<ul>
<li>Use original SNR from generation </li>
<li>Normalize each spec with MinMaxScaler and apply some random SNR distribution for training and constant coef for validation <br>
Segmentation was not a silver bullet BUT it could catch some hard samples, which were ignored by classification!</li>
</ul>
<p>So in order to use it in final ensemble - I have tried 2 approaches:</p>
<ul>
<li>Hard:</li>
</ul>
<pre><code>classification_probs[(masks > pixel_tresh).sum() > n_pixels] += prob_boost
</code></pre>
<p>I have used different <code>pixel_tresh</code> and <code>n_pixels</code> for different segmentation models but I have only used 1.0 as a <code>prob_boost</code> value. I have repeated such a <code>Probability Boosting</code> operation several times. Unfortunately, such an approach led to a huge overfit - Scored 0.781 Public and 0.777 Private LB</p>
<ul>
<li>Soft:</li>
</ul>
<pre><code>segmentation_predict = (masks > pixel_tresh).sum()
segmentation_predict[segmentation_predict < n_pixels] = -1
segmentation_predict = rankdata(segmentation_predict)
final_probs = segmentation_predict + rankdata(classification_probs)
</code></pre>
<p><br>
And my final best selected submission used the soft approach with the big classification ensemble and 5 segmentation models (blended with each other using the <code>Soft</code> approach). It scored 0.776 Public and 0.781 Private LB.</p>
<h1>P.S.</h1>
<p>It was a long road to GM and I am very satisfied with this achievement. I would like to say <strong>Thanks</strong> to all my teammates from previous competitions, all my tutors, Kaggle Community, and, of course, Ukraine, who has provided me with high-quality higher education, a competitive environment, and all needed opportunities</p>
<h1>Glory to Ukraine!!!</h1> |
G2Net Detecting Continuous Gravitational Waves | 12th place solution - from public 281st to private 12th! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Detecting Continuous Gravitational Waves
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, kagglers! I was really surprised that I was in 12th place. Even I thought there should be some shake-ups, but I didn't expect that I would be the only man who climbed the high wall.</p>
<p>My solution is simple:</p>
<ul>
<li>Generate 30k pure signals with sqrtSX=0.</li>
<li>Combine the pure signals with random noise backgrounds to make 2M combinations; the pure signals are flipped and stretched randomly (see the sketch after this list).</li>
<li>Train a ConvNext-Small model with 5 epochs, AdamW, lr=3e-4 cosine annealing, and batch size=128.</li>
<li>Random vertical & horizontal shuffling, random flips, random virtual horizontal lines and beams are used for data augmentation.</li>
<li>Use 4-way flips as test-time augmentation.</li>
</ul>
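<p>A rough sketch of the signal + noise mixing step referenced in the list (the scaling rule and names are illustrative, not the actual training code):</p>
<pre><code>import numpy as np

# Mix a pure (noise-free) signal spectrogram into a noise background at a chosen relative strength.
def mix(signal_power, noise_power, strength, rng):
    sig = signal_power[:, ::-1] if rng.random() < 0.5 else signal_power   # random time flip
    scale = strength * noise_power.mean() / (sig.mean() + 1e-12)          # illustrative scaling
    return noise_power + scale * sig

rng = np.random.default_rng(0)
mixed = mix(rng.random((360, 128)), rng.chisquare(2, (360, 128)), strength=0.05, rng=rng)
print(mixed.shape)
</code></pre>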
<p>I also tried an ensemble with many models, but the final score is bad. A single convnext-small model achieves public lb 0.76 and private lb 0.78. In addition, a single convnext-tiny model also achieves public lb 0.757 and private lb 0.78. Very weird 🤔🤔🤔🤔</p>
<p>You can find my code on <a href="https://github.com/affjljoo3581/G2Net-Detecting-Continuous-Gravitational-Waves" target="_blank">my github repo</a>.</p>
<p>And I found that the overfitting occurred when the model was trying hard to memorize weak signal patterns.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2160097%2Fae8958d991c706080761df6cda5b0d91%2F2023-01-04%20163042.png?generation=1672817532203703&alt=media" alt=""><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2160097%2Ff83a8b0a3190128331e3dac923bed262%2F2023-01-04%20163058.png?generation=1672817544082915&alt=media" alt=""></p>
<p>When the training accuracy of the 0.01-0.02 group suddenly rises, all validation metrics become worse. It was really hard to control the overfitting, even with 2M samples. As <a href="https://www.kaggle.com/kozistr" target="_blank">@kozistr</a> said <a href="https://www.kaggle.com/competitions/g2net-detecting-continuous-gravitational-waves/discussion/375927" target="_blank">in his post</a>, it looks like generating more weak-signal samples is important.</p>
G2Net Detecting Continuous Gravitational Waves | 13th Place Solution: plain machine learning. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Detecting Continuous Gravitational Waves
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>summary</h1>
<ul>
<li>Dataset</li>
<li>Normalization</li>
<li>PyFstat data generation / parameter range</li>
<li>Large Kernel and FFT</li>
<li>Train more with EMA</li>
<li>Etc</li>
</ul>
<h1>Dataset</h1>
<ul>
<li>test/: used for data augmentation during training (finally not applied)</li>
<li>train/ : validation data</li>
<li>Generate from PyFstat: data for training</li>
</ul>
<h1>Normalization</h1>
<pre><code>from scipy.stats import norm
def Fnormalize(X):
X /= X.sum(-2, keepdims=True)
return X
def Pnormalize(X):
n = np.prod(X.shape[-2:])
POS = min(int(n * 0.999), n - 10)
EXP = norm.ppf((POS + 1 - np.pi / 8) / (n - np.pi / 4 + 1))
scale = np.partition(X.flatten(), POS, -1)[POS]
X /= scale / EXP.astype(scale.dtype) ** 2
return X
def normalize(X):
X = (X[..., None].view(X.real.dtype) ** 2).sum(-1)
X = Fnormalize(X)
X = Pnormalize(X)
return X
</code></pre>
<p>This data normalization has a simple theoretical background.<br>
<code>Fnormalize</code> : Changes the complex input to the sum of the squares of the real and imaginary parts of the elements, transforming the input into a <code>chi2</code> distribution.<br>
<code>Pnormalize</code>: Scales so that the square of the <code>POS</code>th largest number of the normal distribution is equal to the <code>POS</code>th largest number of the data.<br>
It would be a good idea to interpret <code>Pnormalize</code> for the <code>chi2</code> distribution, but since this function only scales the input, I think you'll get similar performance.<br>
With the same model, the performance improves just by changing the normalization technique at the inference step.</p>
<table>
<thead>
<tr>
<th></th>
<th>Private</th>
<th>Public</th>
</tr>
</thead>
<tbody>
<tr>
<td>Pnormalize only</td>
<td>0.766</td>
<td>0.739</td>
</tr>
<tr>
<td>Fnormalize + Pnormalize</td>
<td>0.770</td>
<td>0.751</td>
</tr>
</tbody>
</table>
<h1>PyFstat data generation / parameter range</h1>
<p>Before implementing the dataset sampling code, I read the PyFstat library code and tracked the parameter ranges. However, <code>psi</code> and <code>phi</code> were not found.<br>
Surprisingly, only 5 days before the end of the competition, I found out about the proper ranges <a href="https://www.kaggle.com/competitions/g2net-detecting-continuous-gravitational-waves/discussion/373973" target="_blank">here</a>, and the model performance improved.</p>
<p>Setting 1(Gen1):</p>
<pre><code>tstart: [630720013, 1861492413)
F0: [45, 600)
F1: 10 ^ truncnorm.isf(rng.uniform(), a=-100, b=3, loc=-15, scale=2)
Alpha: [0, 2pi)
Delta: [-pi/2, pi/2)
cosi: [-1, 1)
psi: [-1, 1)
phi: [-1, 1)
Tsft: 1800
SFTWindowType: "tukey"
SFTWindowBeta: 0.0001
</code></pre>
<p>Setting 2(Gen2):</p>
<pre><code>tstart: [630720013, 1861492413)
F0: [45, 600)
F1: 10 ^ truncnorm.isf(rng.uniform(), a=-100, b=3, loc=-15, scale=2)
Alpha: [0, 2pi)
Delta: [-pi/2, pi/2)
cosi: [-1, 1)
psi: [-pi/4, pi/4)
phi: [0, 2pi)
Tsft: 1800
SFTWindowType: "tukey"
SFTWindowBeta: 0.0001
</code></pre>
<table>
<thead>
<tr>
<th></th>
<th>Validation ROCAUC</th>
<th>Private</th>
<th>Public</th>
</tr>
</thead>
<tbody>
<tr>
<td>exp/v0.2-1229-1 Best valid (Gen1)</td>
<td>0.8569</td>
<td>0.761</td>
<td>0.747</td>
</tr>
<tr>
<td>exp/v0.2-1229-1 Last epoch (Gen1)</td>
<td>0.7278</td>
<td></td>
<td></td>
</tr>
<tr>
<td>exp/v0.2-1230-2 Best valid (Gen2)</td>
<td>0.8662812499999999</td>
<td>0.759</td>
<td>0.747</td>
</tr>
<tr>
<td>exp/v0.2-1230-2 Last epoch (Gen2)</td>
<td>0.8624</td>
<td>0.770</td>
<td>0.756</td>
</tr>
</tbody>
</table>
<h1>Large Kernel and FFT</h1>
<p>I implemented <a href="https://github.com/klae01/fft-conv-pytorch" target="_blank">fft convolution</a> to use large kernels in training. You can refer to the notebook that was released one week before the end of the competition. <a href="https://www.kaggle.com/code/assign/g2net-large-kernel-inference-fft-conv2d" target="_blank">link</a></p>
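<p>A toy check of why FFT convolution helps with large kernels (generic NumPy/SciPy, not the training layer itself):</p>
<pre><code>import numpy as np
from scipy.signal import convolve2d, fftconvolve

# FFT-based convolution matches direct convolution but scales much better with kernel size.
rng = np.random.default_rng(0)
x = rng.normal(size=(360, 360))
k = rng.normal(size=(15, 63))                 # a large, wide kernel
out_fft = fftconvolve(x, k, mode="same")
out_direct = convolve2d(x, k, mode="same")
print(np.allclose(out_fft, out_direct))       # True up to float tolerance
</code></pre>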
<h2>Weight Decay</h2>
<p>It has been observed that large kernels are prone to bias and are weak. I applied a strong weight decay of 0.001 to that layer.</p>
<table>
<thead>
<tr>
<th></th>
<th>Validation ROCAUC</th>
<th>Private</th>
<th>Public</th>
</tr>
</thead>
<tbody>
<tr>
<td>exp/v0.2-1229-1 weight decay: 1e-3 Best valid (Gen1)</td>
<td>0.8569</td>
<td>0.761</td>
<td>0.747</td>
</tr>
<tr>
<td>exp/v0.2-1230-1 weight decay: 1e-4 Best valid (Gen1)</td>
<td>0.8538749999999999</td>
<td>0.756</td>
<td>0.743</td>
</tr>
</tbody>
</table>
<h2>Dense stem</h2>
<p>Some performance improvement is achieved by changing the depth-wise convolution applied per L1/H1 detector channel of the model to a normal convolution.<br>
Compared to the best validation score, the gap is large, but the last epoch still performs better.</p>
<table>
<thead>
<tr>
<th>exp/v0.2-1231-1</th>
<th>Validation ROCAUC</th>
<th>Private</th>
<th>Public</th>
</tr>
</thead>
<tbody>
<tr>
<td>Best valid (Gen2)</td>
<td>0.8635312500000001</td>
<td>0.762</td>
<td>0.748</td>
</tr>
<tr>
<td>Last epoch (Gen2)</td>
<td>0.8532</td>
<td>0.771</td>
<td>0.760</td>
</tr>
</tbody>
</table>
<h1>Train more with EMA</h1>
<p>I trained with 2x the epochs with the same settings, but the model diverged. However, the checkpoint with the highest validation score showed a better score than before.<br>
Training ended 3 hours before the end of the competition, and this was the last training run.</p>
<table>
<thead>
<tr>
<th></th>
<th>Private</th>
<th>Public</th>
</tr>
</thead>
<tbody>
<tr>
<td>Gen 1</td>
<td>0.761</td>
<td>0.747</td>
</tr>
<tr>
<td>Gen 2</td>
<td>0.770</td>
<td>0.756</td>
</tr>
<tr>
<td>Gen 2 + Dense stem</td>
<td>0.771</td>
<td>0.760</td>
</tr>
<tr>
<td>Gen 2 + Dense stem + EMA</td>
<td>0.773</td>
<td>0.761</td>
</tr>
</tbody>
</table>
<h1>Etc</h1>
<ul>
<li>I didn't use the idea of putting horizontal/vertical lines in the PyFstat data.</li>
<li>To avoid overfitting all hypotheses to the train data, the data belonging to <code>train/</code> were only analyzed at the statistical level. Might be a dumb choice :)</li>
<li>I applied the idea of creating gaps in continuous inputs and filling them with noise; the validation loss at the beginning of training improves greatly, but training becomes very unstable and the training/validation/submission scores are all lower. The idea was ultimately abandoned.</li>
<li>PyFstat is slow because it writes to disk; this is the bottleneck. Use <code>tmpfs</code>.</li>
<li>The model was trained with 256 batches of 64k samples reused 32 times each.</li>
<li>The time spent implementing the tool was too long compared to the time spent improving performance. For example, an implementation that multiprocesses a pipeline that reuses and discards data and guarantees the same order for seeds / fft convolution optimization / etc.</li>
<li>Surprisingly, the best training SNR range was [6, 10]: better than [5, 10] and better than [6, 15]. In the training phase, a low SNR range causes the model to diverge, while a high SNR range makes the model less discerning when no signal is present.</li>
</ul>
<p>Note: This is a rough comparison of the functions that investigate the SNR mentioned above.</p>
<table>
<thead>
<tr>
<th>PyFstat parameter</th>
<th></th>
<th></th>
<th>SNR estimation</th>
</tr>
</thead>
<tbody>
<tr>
<td>h0</td>
<td>sqrtSX</td>
<td>cosi</td>
<td></td>
</tr>
<tr>
<td>1</td>
<td>[10,100]</td>
<td>0</td>
<td>[9.2,101]</td>
</tr>
<tr>
<td>1</td>
<td>[10,100]</td>
<td>1 or -1</td>
<td>[2.3,19]</td>
</tr>
</tbody>
</table> |
G2Net Detecting Continuous Gravitational Waves | 1st place solution: Summing the power with GPU | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Detecting Continuous Gravitational Waves
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for the feedback. I fixed the github repository to public. I hope it is accessible now (I'll continue to add more explanations).<br>
I am impressed by the shortness of your code. I'll learn from you, too.</p>
<p>I wrote what the weight is more in detail in the notebook<br>
<a href="https://www.kaggle.com/code/junkoda/optimal-weights-for-signal-to-noise-ratio" target="_blank">https://www.kaggle.com/code/junkoda/optimal-weights-for-signal-to-noise-ratio</a><br>
because I got similar questions, too. I wanted to brush up a little more, but looks like you already noticed.</p>
<p>Thanks for the question. I thought it might not be clear, too. I mean the slope and intercept of a linear equation y = a x + b:<br>
f(t) = f0 + fdot t,<br>
where f0 is the 360 frequencies (intercepts) and fdot is the 241 patterns of <code>slope</code>.</p>
<p>I guess scientists usually use an iterative approach such as Markov chain Monte Carlo, like the 6th place's simulated annealing, but I don't know.</p>
<ol>
<li><p>The "simulation" is the standard PyFstat data generation. I created it without noise; then the amplitude squared is the total power: sum(|h|**2, axis=frequency).</p></li>
<li><p>What I wrote was obviously too short and I was writing the detail with LaTex today, but it was not an appropriate day after days of excitement and caffeine overdose. I'll add it here and let you know after a few days.</p></li>
</ol>
<p>Your performance was amazing, too! Looks like working 24 hrs a day, updating the score at 3am, 4 am. I don't think I'll ever see a team getting the first in the last 3 minutes with such a large change in the score again.</p>
<p>Thanks! I'm looking forward to reading you solution, too. Mine is brute force, but do you use something like Markov chain Monte Carlo? Amazing first medal!</p>
<p>For an unsuccessful wave model, I tried to add 48 complex data points each day using a template, create a 3D block of frequency x frequency time derivative x time, and run a 3D CNN, but I would need orders of magnitude more templates to add the complex wave.</p>
<p>Take away…, I am much happier with a practical solution than an incomplete ideal solution that didn't meet the deadline; the latter is my situation with the GPS Google Smartphone competition last year.</p>
<p>I don't think I have anything worthwhile… For example, I made tiny ResNet-like models reading last year solutions, but those are not better than simple 2D image models even for simple linear frequency growth. There is a wonderful 1D wave detection model by the previous 2nd place winner:</p>
<p><a href="https://github.com/analokmaus/kaggle-g2net-public" target="_blank">https://github.com/analokmaus/kaggle-g2net-public</a></p>
<p>I thank Kaggle and the organizers for hosting this gravitational wave competition. I enjoyed the previous binary black hole merger competition as well, and was very impressed by the gold-medal solutions. I thought large pretrained image models would be the strongest anyway, but the winners outperformed them with 1-dimensional convolutional neural networks. I joined this competition so that I could build such deep neural network models detecting the wave, not the power, … but failed.</p>
<p>The largest difference between the two competitions is that the Earth rotates during the 120 days, which imprints a complicated pattern into the signal frequency. Adding the wave is very delicate and requires very accurate phase patterns, and I was not able to add up the complex Fourier modes effectively within reasonable computational resources. After more than one month without any progress, I thought I could still get a silver medal even with an unsatisfactory approach, so I gave up adding the wave and added the power instead.</p>
<h1>Solution</h1>
<ul>
<li>Sum power (absolute-value squared) along various signal patterns</li>
<li>No machine learning</li>
<li>No use of external data or leakage</li>
</ul>
<h2>Power summation</h2>
<ol>
<li>Extract signal frequency and amplitude [total power P(t)] from the simulations</li>
<li>Subtract Doppler shift frequency from the data frequency for 4000 signal patterns </li>
<li>Weight the data proportional to the signal amplitude pattern; this is the optimal linear weight w(t)</li>
<li>Sum the weighted power along lines: 360 frequencies (intercept) × 241 slops in [-120, 120] (frequency bin / 120 days)</li>
<li>Take the maximum</li>
</ol>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F954117%2F1bb4eb2f23c068c7c2998fc8a6e20ef8%2Fpower_sum.png?generation=1672802594937302&alt=media" alt=""></p>
<p>The values are highly skewed from the typical range [-1, 1] because these are maxima over 4000 templates × 360 frequencies × 241 slopes.</p>
<p>This took 5 days using GPU RTX 3090. Full range of slope is [-360, 360] but [-120, 120] already took long enough.</p>
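<p>A simplified sketch of the line summation (Doppler demodulation and the per-template weights are assumed to be done already; the wrap-around at the frequency edges is a shortcut for illustration):</p>
<pre><code>import numpy as np

def line_power_sum(power, weights, slopes):
    """power: (n_freq, n_time) demodulated spectrogram power; weights: (n_time,) optimal linear weights;
    slopes: total frequency drift in bins over the full duration. Returns (n_slopes, n_freq) sums."""
    n_freq, n_time = power.shape
    t_frac = np.linspace(0.0, 1.0, n_time)                # time as a fraction of the duration
    out = np.empty((len(slopes), n_freq))
    for i, slope in enumerate(slopes):
        shift = np.rint(slope * t_frac).astype(int)       # frequency-bin offset at each time
        rows = (np.arange(n_freq)[:, None] + shift[None, :]) % n_freq
        out[i] = (power[rows, np.arange(n_time)] * weights).sum(axis=1)
    return out

rng = np.random.default_rng(0)
stat = line_power_sum(rng.chisquare(2, size=(360, 4096)), np.ones(4096), range(-120, 121))
print(stat.max())   # detection statistic: maximum over slopes and intercepts
</code></pre>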
<h2>Real noise normalization</h2>
<ol>
<li>Normalize by the noise rms at each time h -> h / sigma(t)</li>
<li>Remove single-frequency noise by masking anomalously large frequency bin</li>
<li>Normalize the frequency dependence by the remaining rms sigma(f)</li>
</ol>
<p>I know the noise is not always written as a product of a time dependence and a frequency dependence, but I did not have time for a better treatment. Some false positives remain in my predictions.</p>
<h2>Follow up summation with sinc kernel</h2>
<p>The signal spreads among frequency bins as a sinc function (assuming the window function for the short-time Fourier transform is almost a top hat). I use a sinc kernel with width 8 and stride 1/8 of a frequency bin to collect the signal; this is the optimal linear weighting in the frequency direction. I recompute the power sum with this kernel around the largest-power line from the first step for a subsample of 400 templates. This gives a surprisingly large boost to the public score, 0.825 -> 0.848.</p>
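<p>A small sketch of the sinc weighting in the frequency direction (offsets and normalization are illustrative):</p>
<pre><code>import numpy as np

# One kernel per sub-bin offset: 8 offsets (stride 1/8 bin) x 9 integer taps spanning width 8.
def sinc_kernels(width=8, stride=1/8):
    offsets = np.arange(0, 1, stride)
    taps = np.arange(-width // 2, width // 2 + 1)
    k = np.sinc(taps[None, :] - offsets[:, None])
    return k / np.linalg.norm(k, axis=1, keepdims=True)

print(sinc_kernels().shape)   # (8, 9)
</code></pre>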
<p>Finally, I apply a sigmoid to the standardized power sum and submit, which is the same as just submitting the power sum. I thought the prediction should also depend on the noise level: if the signal is undetected, there should be a larger probability of a positive for larger noise, because more of the data is undetected. I modeled this effect but could not improve the score.</p>
<p>PS:<br>
More about the weight<br>
<a href="https://www.kaggle.com/code/junkoda/optimal-weights-for-signal-to-noise-ratio" target="_blank">https://www.kaggle.com/code/junkoda/optimal-weights-for-signal-to-noise-ratio</a></p>
<p>Code<br>
<a href="https://github.com/junkoda/kaggle_g2net2_solution" target="_blank">https://github.com/junkoda/kaggle_g2net2_solution</a></p>
<p>Thanks. It must be a disappointment for people who want to learn machine learning. I hope there is something useful for signal processing.</p> |
G2Net Detecting Continuous Gravitational Waves | 20th place solution (how I spent lots of time on things that didn't work) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Detecting Continuous Gravitational Waves
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to organizers and making a fun physics-based competition.</p>
<p>I'm very impressed by the creativity (and simplicity) of the solutions (so far) posted from <a href="https://www.kaggle.com/junkoda" target="_blank">@junkoda</a> and <a href="https://www.kaggle.com/shunrcn" target="_blank">@shunrcn</a>, I'm sure there are more to come.</p>
<p>Also thanks to the kaggle community for being such great competitors.</p>
<h1>Solution:</h1>
<h2>Train</h2>
<ul>
<li>pyfstat to generate 20k signals with test-data-based gaps</li>
<li>Same portion of real vs. fake data as in test set (fake being roughly constant standard deviation)</li>
<li>All real type data generated by sampling from test data (e.g.start times, durations, standard deviations vs. time)</li>
<li>Gaussian random complex noise added to complex signal, as well as direct real noise to absolute magnitude.</li>
<li>Ignore h0, just add signal to noise in normalized way based upon total power in signal vs. noise along signal. Signal power went from just barely human visible at 360x360 to 10x lower.</li>
<li>About 200k training samples, Adam, Cosine scheduler, batch=32*4</li>
</ul>
<h2>Train and Test</h2>
<ul>
<li>Fill-out time data to 5760, binning data to be uniformly spaced</li>
<li>Cleaned data by removing all line or other spot noise from test data (quite effectively and fast)</li>
<li>Fill-in gaps of all data with surrounding level of noise</li>
<li>Compute the absolute value as well as H1 times the complex conjugate of L1 (the equivalent of cross-correlation in FFT space); see the sketch after this list</li>
<li>Reduced by np.mean (like avgpool) down to 360x360 to help reduce noise</li>
<li>Normalize</li>
</ul>
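<p>A rough sketch of the channels described in that list (array shapes and the pooling factor are illustrative):</p>
<pre><code>import numpy as np

def build_channels(h1, l1, n_cols=360):       # h1, l1: complex arrays of shape (360, n_time)
    cross = h1 * np.conj(l1)                  # equivalent of cross-correlation in FFT space
    chans = np.stack([np.abs(h1) ** 2, np.abs(l1) ** 2, cross.real, cross.imag])
    t = chans.shape[-1] // n_cols * n_cols
    chans = chans[..., :t].reshape(chans.shape[0], h1.shape[0], n_cols, -1).mean(-1)   # np.mean pooling
    return chans.astype(np.float32)

rng = np.random.default_rng(0)
noise = rng.normal(size=(2, 360, 5760)) + 1j * rng.normal(size=(2, 360, 5760))
print(build_channels(noise[0], noise[1]).shape)   # (4, 360, 360)
</code></pre>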
<h2>Model</h2>
<ul>
<li>tf_efficientnet_b5_ns, 20-30 epochs, 5 folds</li>
<li>No Aug</li>
<li>Mixup (was required to avoid overfitting, is that what others saw?)</li>
<li>TTA of various seeds for filling-in the noise between gaps (worried model would overfit on clusters of noise)</li>
<li>Blended best single model, 15 5-fold TTAs of that best, large kernel</li>
<li>Nicely, CV score matched public very well.</li>
</ul>
<h1>Things I wrongly disregarded as not helpful</h1>
<ul>
<li>Normalizing per time point. I disregarded it too quickly, thinking the signal would vary in strength and that would make things harder. I was simply wrong to think that. Thanks <a href="https://www.kaggle.com/ren4yu" target="_blank">@ren4yu</a> </li>
</ul>
<h1>Things that failed or didn't help:</h1>
<ul>
<li>Treating as regression on power fraction (didn't help)</li>
<li>replknet large kernel model (didn't help)</li>
<li>Larger image sizes, 360x720 and 360x1440 (didn't help)</li>
<li>Denoising VAE with noise+signal given, signal as goal (failed totally to learn, even at small resolutions)</li>
<li>Stable diffusion (VAE they trained fails totally on our noise level)</li>
<li>resnet18 (failed to work at all for me)</li>
<li>mixnet_m (does ok)</li>
<li>mixnet_l (does ok)</li>
<li>convmixer_768_32 (does ok, but slow)</li>
<li>vit_base_patch16_224 and vit_relpos_small_patch16_224 (though ViT would pick up on sequence of patches, but failed totally on me, probably bug in usage)</li>
<li>large_kernel (<a href="https://www.kaggle.com/code/laeyoung/g2net-large-kernel-inference/data" target="_blank">https://www.kaggle.com/code/laeyoung/g2net-large-kernel-inference/data</a>) but dropping avgpool since not needed. Tried various kernel sizes, nothing really helped. Ok model but weaker than efficientnet</li>
<li>Training on fake and real data separately (only did bit worse)</li>
<li>Adding noise as Aug (didn't help)</li>
<li>BiLSTM + Conv approach like done for BH-BH case and other competitions (isft -> sft is not perfect, and even matching pyfstat sft, the human-visible signals get easily lost by the isft process, so time series approach would likely fail, so gave up)</li>
<li>Downloading real data to find segments that match in real test data, as <a href="https://www.kaggle.com/vslaykovsky" target="_blank">@vslaykovsky</a> said. I did that, but quickly realized the data did not match for gaps etc. So disregarded it.</li>
<li>Use pyfstat signal detector itself, both MCMC and grid search. I had to hack the code to convert hd5f -> sft. Does very poorly even on easy signals when range of search is even a tiny bigger than actual signal. So failed to work. </li>
<li>Look at test set overlap. Form bounding boxes for every test sample and find matches. There are about 1000 matches, with varying overlaps. I used those to remove the noise, then cleaned up the standard deviation and mean at the left-over interfaces. But it didn't help, probably because the subtractions made the data less clean.</li>
</ul>
<h1>What I don't understand:</h1>
<p>1) I always got a boost from blending in a good amount of the top 0.761 kernel that was partially based on the large-kernel model; I couldn't get high enough independently. In the end, the blend that did best on private used half my stuff and half of that kernel. The large-kernel model, part of that 0.761 score, seems to do better on fake data with weaker signals, but I could never reproduce that with my own large-kernel or RepLKNet models.<br>
2) Why did the VAE fail so badly? I never got out of the AUC ~ 0.5 regime.</p>
G2Net Detecting Continuous Gravitational Waves | 22th-place solution : simulated CW signals & augmentations | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Detecting Continuous Gravitational Waves
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I second this! As you said, I also found that the model can't learn anything from high SNR with cosi=0 cases. I should have tried that.<br>
Thanks for sharing :)</p>
<p>Hello everyone!</p>
<p>First, thanks to EGO for hosting an exciting competition! Also, congratulations to all the winners!</p>
<h2>Data</h2>
<h3>Pre-Processing</h3>
<p>In my experiment, <a href="https://www.kaggle.com/code/laeyoung/g2net-large-kernel-inference" target="_blank">preprocessing</a> (<code>normalize</code> function) works better than the power spectrogram. It improves the score by about +0.02 on CV/LB. After normalizing the signal, take a mean over the time axis. The final shape is (360, 360).</p>
<h3>Simulation</h3>
<p>Generating samples is the most crucial part of boosting the score. I could get 0.761 on the LB with a single model.</p>
<p>In short, signal depth (<code>sqrtSX / h0</code>) has a huge impact. I generated 100K samples (50K positives, 50K negatives) and uniformly sampled the signal depth between 10 and 100. The <code>cosi</code> parameter is uniformly sampled from (-1, 1); see the sketch after the table below.</p>
<table>
<thead>
<tr>
<th>signal depth</th>
<th>LB score</th>
</tr>
</thead>
<tbody>
<tr>
<td>10 ~ 50</td>
<td>0.73x ~ 0.74x</td>
</tr>
<tr>
<td>10 ~ 80</td>
<td>0.75x</td>
</tr>
<tr>
<td>10 ~ 100</td>
<td>0.761</td>
</tr>
</tbody>
</table>
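<p>Since depth is defined as <code>sqrtSX / h0</code>, sampling a depth (and fixing <code>sqrtSX</code>) determines the injected <code>h0</code>. Below is a minimal sketch of how one such sample could be generated with PyFstat's <code>Writer</code>. The parameter names follow the PyFstat tutorial, while the concrete values (start time, duration, band, frequency and spin-down ranges) are placeholders chosen for illustration, not the exact settings used here.</p>
<pre><code>import numpy as np
import pyfstat

sqrtSX = 1e-23
depth = np.random.uniform(10, 100)      # signal depth sampled uniformly, as in the table above
tstart = 1238166018                     # placeholder GPS start time

writer = pyfstat.Writer(
    label="sample_000",
    outdir="./generated",
    tstart=tstart,
    duration=4 * 30 * 86400,            # ~4 months of 1800 s SFTs (placeholder)
    tref=tstart,
    detectors="H1,L1",
    sqrtSX=sqrtSX,
    Tsft=1800,
    F0=np.random.uniform(50, 500),      # placeholder frequency range
    F1=-1e-9,                           # placeholder spin-down
    Alpha=np.random.uniform(0, 2 * np.pi),
    Delta=np.arcsin(np.random.uniform(-1, 1)),
    h0=sqrtSX / depth,                  # depth = sqrtSX / h0
    cosi=np.random.uniform(-1, 1),      # cosi sampled uniformly in (-1, 1)
    Band=0.2,
)
writer.make_data()                      # writes the .sft files for both detectors to outdir
</code></pre>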
<h3>Augmentations</h3>
<p>I also spent a lot of time on augmentations. Here's the list.</p>
<ol>
<li>v/hflip</li>
<li>shuffle channel</li>
<li>shift on freq-axis </li>
<li>denoise a signal (subtract corresponding noise from the signal)</li>
<li>add noises<ul>
<li>Gaussian N(0, 1e-2)</li>
<li>mixed (add or concatenate) with another (stationary) noise(s)</li></ul></li>
<li>add vertical line artifact(s).</li>
<li>SpecAugment</li>
<li>mixup (alpha 5.0)<ul>
<li>perform <code>or</code> mixup (see the sketch after this list)</li></ul></li>
</ol>
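<p>As a concrete illustration of item 8 above, a minimal sketch of the <code>or</code> mixup is shown below; the function name and the exact way the two spectrograms are combined are my assumptions, while the Beta(5, 5) mixing coefficient and the or-combined label follow the list above.</p>
<pre><code>import numpy as np

def or_mixup(x1, y1, x2, y2, alpha=5.0):
    """Mix two spectrograms; the label is the logical OR of the two labels.

    x1, x2: arrays of the same shape; y1, y2: binary targets (0/1).
    """
    lam = np.random.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = max(y1, y2)   # "or" mixup: positive if either input contains a signal
    return x, y
</code></pre>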
<h2>Model</h2>
<p>First, I searched over backbones (effnet, nfnet, resnest, convnext, vit-based) and found that <code>convnext</code> works best on the CV & LB score. After selecting a baseline backbone, I experimented with customizing the stem layer (e.g. large kernel & pool sizes, a multi-convolution stem with various kernel sizes) to detect the long-lasting signal effectively, but this didn't affect the performance positively.</p>
<h2>Ensemble</h2>
<p>Most of the models used in the ensemble are <code>convnext-xlarge</code>, but each model was trained with different variations (e.g. augmentations, simulated samples, …), plus <code>eca-nfnet-l2</code> and <code>efficientnetv2-xl</code> for one model each. Every model was trained on a different dataset and the LB score seemed reliable, so I adjusted the ensemble weights by LB score.</p>
<p>I selected the two best LB submissions (LB 0.768 PB 0.771). And the best PB that I didn't select is 0.778 (LB 0.766) (mixing all my experiments).</p>
<h2>Works</h2>
<ul>
<li><code>convnext</code> family backbone</li>
<li>signal depth 10 ~ 100</li>
<li>hard augmentation</li>
<li>pair stratified k fold<ul>
<li>8 folds</li>
<li>stratified on the target</li>
<li><code>pair</code> means the pair (corresponding noise & signal) must be in the same fold.</li></ul></li>
<li>pseudo label (smooth label)</li>
<li>segmentation (but hard to converge on my experiment)</li>
<li>TTA</li>
</ul>
<h2>Not Works</h2>
<ul>
<li>segmentation with classification head (0.6 * bce + 0.4 * dice)<ul>
<li>Actually, seg with cls works slightly better than only cls, but hard to train without loss divergence. So, I just did only cls.</li></ul></li>
<li><code>cosi == 0</code><ul>
<li><code>cosi</code> is also a critical parameter to determine an SNR. I generated more samples where <code>cosi</code> is 0, but there's a score drop.</li></ul></li>
<li>augmentations (not worked)<ul>
<li>swap with random negatives (proposed at the past competition)</li>
<li>random resized crop</li></ul></li>
<li>Customize a stem layer with large kernel & pool sizes.</li>
</ul>
<p>I hope this could help you :)</p>
<p>Happy new year!</p>
<p>First of all, huge congrats to win the competition! I learned from your solution a lot : )</p>
<p>I also experimented with different depth ranges (10 ~ 50, 10 ~ 80, 10 ~ 100), and there's a performance gap</p>
<ul>
<li>10 ~ 50 : LB 0.73x ~ 0.74x</li>
<li>10 ~ 80 : LB 0.75x</li>
<li>10 ~ 100 : LB 0.76x</li>
</ul>
<p>I also thought that a small depth would be enough and that the model would barely learn the large-depth cases. However, after checking the AUC score by depth & GradCAM, there is something the model catches, and I guess the train & test sets also contain large-depth cases.</p>
<p>Here's the score by depth! (It may be slightly different, since this is just from memory.)</p>
<table>
<thead>
<tr>
<th>depth</th>
<th>score</th>
</tr>
</thead>
<tbody>
<tr>
<td>10 ~ 20</td>
<td>1.0</td>
</tr>
<tr>
<td>20 ~ 30</td>
<td>0.95</td>
</tr>
<tr>
<td>30 ~ 40</td>
<td>0.9</td>
</tr>
<tr>
<td>40 ~ 50</td>
<td>0.8</td>
</tr>
<tr>
<td>50 ~ 60</td>
<td>0.72</td>
</tr>
<tr>
<td>60 ~ 70</td>
<td>0.6</td>
</tr>
<tr>
<td>70 ~ 80</td>
<td>0.55</td>
</tr>
<tr>
<td>80 ~ 90</td>
<td>0.53</td>
</tr>
<tr>
<td>90 ~ 100</td>
<td>0.52</td>
</tr>
</tbody>
</table> |
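<p>For reference, a per-depth breakdown like the table above can be computed by scoring each positive depth bucket against all negatives. This is only a sketch and assumes a hypothetical validation dataframe with <code>depth</code>, <code>target</code> and <code>pred</code> columns.</p>
<pre><code>import pandas as pd
from sklearn.metrics import roc_auc_score

def auc_by_depth(df, bins=(10, 20, 30, 40, 50, 60, 70, 80, 90, 100)):
    negatives = df[df["target"] == 0]
    rows = []
    for low, high in zip(bins[:-1], bins[1:]):
        positives = df[(df["target"] == 1) & (df["depth"] >= low) & (df["depth"] < high)]
        if len(positives) == 0:
            continue
        subset = pd.concat([positives, negatives])
        rows.append((f"{low} ~ {high}", roc_auc_score(subset["target"], subset["pred"])))
    return pd.DataFrame(rows, columns=["depth", "score"])
</code></pre>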
G2Net Detecting Continuous Gravitational Waves | 2nd Place Solution: GPU-Accelerated Random Search | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Detecting Continuous Gravitational Waves
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I published a source code: <a href="https://github.com/knshnb/kaggle-g2net2-2nd-place" target="_blank">https://github.com/knshnb/kaggle-g2net2-2nd-place</a></p>
<p>Thanks to the host and Kaggle staff for holding the competition and congratulations to the winners! I also appreciate my teammates ( <a href="https://www.kaggle.com/charmq" target="_blank">@charmq</a> and <a href="https://www.kaggle.com/yoichi7yamakawa" target="_blank">@yoichi7yamakawa</a>) a lot.</p>
<p>Eight hours before the contest ended, we realized that real data in close frequency range might have exactly the same noise, and by actually identifying some of them, we boosted the performance to 0.855/0.849 (1st in public LB!). In this post, we focus on our main solution without this leak magic, which could still win 2nd place (0.835/0.826).</p>
<h2>Basic Algorithm</h2>
<p>After struggling with training neural network models, in the last two weeks we found that a very simple solution could work: a random search of signals. Using the velocity for each timestamp computed by PyFstat, the shapes of waves are determined by four parameters: f0, f1, alpha, and delta. We searched the combination of these parameters that maximizes the mean powers (=square of absolute values) of the corresponding part in each data. The essential part of our algorithm is simple as follows (NumPy-like pseudocode):</p>
<pre><code># stft_sq: (n_freq, n_timestamp) squared magnitudes of the SFT
# frequency_Hz: (n_freq,) frequency of each bin
# velocity: (3, n_timestamp) detector velocity (from PyFstat)
for _ in range(n_random_search):
    f0, f1, alpha, delta = random_sample_params()
    signal = calc_signal_shape(f0, f1, alpha, delta, velocity)
    frequency_idx = np.round((signal - frequency_Hz[0]) / (frequency_Hz[1] - frequency_Hz[0])).astype(int)
    signal_part = stft_sq[frequency_idx, np.arange(n_timestamp)]
    score = np.sqrt(signal_part.mean())
</code></pre>
<p>As a prediction, we simply outputted the mean score of the two detectors (L1 and H1) for each data.</p>
<p>We implemented batch-wise execution of this algorithm using <a href="https://github.com/cupy/cupy" target="_blank">Cupy</a> to accelerate on GPU. It enabled the search of 3276800 points in around 20 seconds per data on NVIDIA V100. Therefore, it took around 3 GPU hours and 2 GPU days for the execution of all train data and test data, respectively.</p>
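<p>For illustration, one GPU batch of this random search could look roughly like the sketch below. It assumes all array arguments are already CuPy arrays and that the detector velocity is expressed in units of the speed of light (PyFstat style); the parameter distributions shown here are simplified placeholders, and the distributions we actually used are given in the Details section below.</p>
<pre><code>import cupy as cp

def batched_search(stft_sq, frequency_Hz, velocity, timestamps, tref, batch_size=4096):
    """stft_sq: (n_freq, n_time) power; frequency_Hz: (n_freq,);
    velocity: (3, n_time) detector velocity in units of c; timestamps: (n_time,) seconds."""
    n_time = stft_sq.shape[1]
    # sample a batch of candidate parameters (illustrative distributions)
    f0 = cp.random.uniform(float(frequency_Hz[0]), float(frequency_Hz[-1]), size=(batch_size, 1))
    f1 = -10.0 ** cp.random.uniform(-11, -8, size=(batch_size, 1))
    alpha = cp.random.uniform(0, 2 * cp.pi, size=(batch_size, 1))
    delta = cp.arcsin(cp.random.uniform(-1, 1, size=(batch_size, 1)))
    # unit vector pointing toward each candidate source
    n_hat = cp.concatenate(
        [cp.cos(delta) * cp.cos(alpha), cp.cos(delta) * cp.sin(alpha), cp.sin(delta)], axis=1)
    doppler = 1.0 + n_hat @ velocity                                  # (batch, n_time)
    signal = (f0 + f1 * (timestamps - tref)[None, :]) * doppler       # signal frequency per column
    idx = cp.rint((signal - frequency_Hz[0]) / (frequency_Hz[1] - frequency_Hz[0])).astype(cp.int32)
    idx = cp.clip(idx, 0, stft_sq.shape[0] - 1)                       # stay inside the band
    powers = stft_sq[idx, cp.arange(n_time)[None, :]]                 # (batch, n_time)
    return cp.sqrt(powers.mean(axis=1)).max()                         # best score in this batch
</code></pre>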
<h2>Details</h2>
<p>We tried several frequency widths around signals (we take only the nearest one in the above pseudocode) and weighting methods. The best one was the nearest two and linear interpolation by the differences from signal frequencies.</p>
<p>When calculating signal shapes, we fixed <code>tref</code> for the whole data to a certain value because it can be covered by changing f0 and f1.</p>
<p>Choosing the right parameter distribution was important. From the analysis of train data, we realized that data with higher scores than a certain threshold (around 1.541) were almost surely positive. The distribution of found parameters in test data with high scores are as follows (the right figure is f0 scaled by each data's frequency range):<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9088007%2Fa7a7e2c88250b1bd0577259da7a2c808%2Fparameter-distribution.png?generation=1673022472671396&alt=media"><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9088007%2F8d0831bfed267e4e608344eca4fd6998%2Ff0_ratio.png?generation=1673022226325898&alt=media"><br>
From this, we decided to sample each parameter from the following distributions, which enhanced the performance a lot:</p>
<pre><code>alpha: Uniform([0, 2 \pi])
delta: arcsine(Uniform([-1, 1]))
f0: Beta(2, 2) between frequency range 20% extended to both sides
f1: 1/3 are from -2 * 10^(Uniform([-11, -9])), 2/3 are from 2 * 10^(Uniform([-11, -8]))
</code></pre>
<p>Around 1/5 of the test data included real noise with nonstationarity and peak in certain frequencies, etc. For these data, we performed time-wise normalization after a simple rule-based frequency mask like below:</p>
<pre><code>def mask_noisy_frequencies(stft_sq, median_coeff, percent):  # names are illustrative
    freq_std = np.std(stft_sq, axis=1)
    error_freq = (freq_std > np.median(freq_std) * median_coeff) & (freq_std > np.percentile(freq_std, percent))
    stft_sq[error_freq] = stft_sq[~error_freq].mean(axis=0)
    return stft_sq
</code></pre>
<p>It seems most of the strong noises are removed by this preprocessing (an example of <code>id:56b090eaf</code> is below), but we did not have enough time to put much effort into this, so there might be some room for improvement.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9088007%2Fe6e2af680a8f328ac8581036a8483af3%2Fdelete_freq.png?generation=1673021886640173&alt=media"></p>
<h2>Validation</h2>
<p>Since the number of training data was very limited, we calculated the average AUC of 5 different random seeds. It correlated with the public LB to some extent. The best mean train AUC was 0.902940, which seems to overfit a little, still. (Did not have enough time after identifying parameter distribution…)</p>
<p>For real data, we generated a validation set by adding signals to low-score samples, but it did not correlate well with LB, so we focused more on LB scores.</p>
<h2>Things that might be improved</h2>
<ul>
<li>More sophisticated preprocessing of real data</li>
<li>2-stage search</li>
<li>Rule-based or machine learning postprocessing of the search results</li>
<li>Consider amplitude differences by timestamps</li>
</ul>
<p>Thanks! That's three-dimensional velocity vectors for different timestamps, which you can get by using PyFstat (appear as <code>velocities</code> in the <a href="https://github.com/PyFstat/PyFstat/blob/master/examples/tutorials/1_generating_signals.ipynb" target="_blank">tutorial</a>).</p> |
G2Net Detecting Continuous Gravitational Waves | 3rd place solution: How far can deep learning go? | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Detecting Continuous Gravitational Waves
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for the comment! This notebook shows how we denoised the spectrogram.<br>
<a href="https://www.kaggle.com/code/iafoss/creative-denoising-part-of-top-3-solution/notebook" target="_blank">https://www.kaggle.com/code/iafoss/creative-denoising-part-of-top-3-solution/notebook</a></p>
<p>First of all, I would like to express deep gratitude to the competition organisers and the Kaggle team. The competition covers a wide range of knowledge, including data generation, deep learning, and classical methods, making it one of the most challenging and interesting competitions in my own Kaggle history. Without my wonderful teammates this ranking would not have been possible and it was indeed a pleasure for me to work with them.<br>
Congratulations to all the winners (and a special big congratulations to the new GM and my amazing teammate Gleb)! It was very interesting to read about various methods used in other teams' solutions.</p>
<p>Our approach is comprised of an ensemble of various <strong>deep learning</strong> models and <strong>"creative denoising"</strong>, which is a denoising technique that exploits the frequency overlap of real-world noise.</p>
<h2>Data generation</h2>
<h3>Fixed dataset approach</h3>
<p>We generated 40k images, including stationary noise, non-stationary noise, glitches (horizontal instrument lines) using pyfstat.</p>
<h3>Infinite training approach</h3>
<p>As we looked into the data, we found that the background noise follows a chi-square distribution (the real and imaginary parts follow independent Gaussian distributions). We therefore considered fast online generation of noise with almost identical statistics (mean, variance) to each test sample (<a href="https://www.kaggle.com/code/analokamus/fast-online-noise-generation/notebook?scriptVersionId=115544433" target="_blank">This notebook</a> shows how we generated noise). <br>
We generated 32k to 128k signal templates using pyfstat with sqrtSX=0, and injected signals into the generated noise. Also, since the signals have a frequency dependency (<strong>Figure 1</strong>), during the training phase we: </p>
<ol>
<li>Sample a test image</li>
<li>Generate noise based on its statistics</li>
<li>Sample a signal generated at the same frequency from signal banks and inject</li>
</ol>
<p>In this way, we were able to train our models with almost infinite background noise patterns. This prevented the model from overfitting to the background noise, and improved the model performance significantly.</p>
<p><strong>Figure 1.</strong><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1973217%2Fc82503ad6f32fe8bc11d7981e9f3d350%2FScreen%20Shot%202022-12-12%20at%202.17.10%20PM.png?generation=1672906996452633&alt=media"></p>
<h2>Validation strategy</h2>
<p>We generated a validation set using the infinite training approach described above, with a one-to-one correspondence to the test set. The positive ratio was 0.66, and the injected signals had random parameters with signal depth ranging from 25 to 50. This validation set correlated well with the public LB. </p>
<h2>Preprocessing</h2>
<h3>Temporal synchronization and resize</h3>
<p>We synchronized the spectrograms from the two detectors so that the model can catch the signal much more easily, with no need to look around. Temporally averaged bins (128 - 720 bins) were used as the input image.</p>
<h3>Spectrogram normalization</h3>
<h4>Global normalization</h4>
<p>( <a href="https://www.kaggle.com/chris62" target="_blank">@chris62</a> )<br>
We normalized the entire input image by subtracting mean and dividing by the std dev</p>
<h4>Column-wise sqrt normalization</h4>
<p>( <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> <a href="https://www.kaggle.com/analokamus" target="_blank">@analokamus</a> )<br>
Since background noise follows chi square distribution, we first calculated sqrt of the input image so that the output should follow normal distribution, and then subtracted mean / divided by std in each column to mitigate the impact of nonstationary noise.</p>
<h2>Architectures and training</h2>
<h3>Spectrogram image classifier</h3>
<p>We used CNNs with the following modification:</p>
<ul>
<li>Denoising layer(s) in the beginning with large kernels (3x31, 5x15, 5x31, 7x65)</li>
<li>AE denoising based pretraining for large kernel layers ( <a href="https://www.kaggle.com/Iafoss" target="_blank">@Iafoss</a> )</li>
<li>Denoising AUX with a light Segformer style decoder and MSE loss for noise-free signal ( <a href="https://www.kaggle.com/Iafoss" target="_blank">@Iafoss</a> )</li>
<li>Add a UNet segmentation decoder to extract signal and use dice loss as aux loss ( <a href="https://www.kaggle.com/analokamus" target="_blank">@analokamus</a> )</li>
<li>Signal depth scheduling: start with 10-20 depth and finish with 25-50 depth ( <a href="https://www.kaggle.com/Iafoss" target="_blank">@Iafoss</a> <a href="https://www.kaggle.com/analokamus" target="_blank">@analokamus</a> )</li>
<li>Add frequency to the output head ( <a href="https://www.kaggle.com/chris62" target="_blank">@chris62</a> )</li>
<li>Incorporate frequency information and statistics based features ( <a href="https://www.kaggle.com/drhb" target="_blank">@drhb</a> )</li>
</ul>
<p>Various backbones including EfficientNetB7, ResNeXt50, Inception v4, Xception65, Convnext base and ViT were used. </p>
<table>
<thead>
<tr>
<th>Model name</th>
<th>Public LB</th>
<th>Private LB</th>
<th>Author</th>
</tr>
</thead>
<tbody>
<tr>
<td>VIT FREQ</td>
<td><strong>0.779</strong></td>
<td><strong>0.793</strong></td>
<td><a href="https://www.kaggle.com/drhb" target="_blank">@drhb</a></td>
</tr>
</tbody>
</table>
<pre><code>- first 2 convolution layers from inception -> image_features
- [frequency, stat based features ] -> embedding
- combining [image_features, embedding]
- applying this to `vit_large_patch16_224_in21k`
- trained for 30 epoch
- lr: 1e-4, cosine annealing
- augmentation: flip, time/freq masking
</code></pre>
<table>
<thead>
<tr>
<th>Model name</th>
<th>Public LB</th>
<th>Private LB</th>
<th>Author</th>
</tr>
</thead>
<tbody>
<tr>
<td>Model03</td>
<td>0.774</td>
<td>0.792</td>
<td><a href="https://www.kaggle.com/analokamus" target="_blank">@analokamus</a></td>
</tr>
</tbody>
</table>
<pre><code>- infinite training with 80k signal templates
- input image size = 360 x 360
- 3x31 denoising layer
- UNet EfficientNet-b7
- loss = BCE loss x 0.7 + dice loss x 0.3
- cosine annealing lr for 40 epochs
- H/V flip, time/freq masking, random amplifier
</code></pre>
<h3>Matched filter-like classifier</h3>
<p>( <a href="https://www.kaggle.com/analokmaus" target="_blank">@analokmaus</a> )<br>
I randomly sampled 4096 or 8192 signals from the signal bank, and used them as convolution kernels. <br>
The output image with size of 360 x {num filter} is then passed to a simple CNN classification head.</p>
<table>
<thead>
<tr>
<th>Model name</th>
<th>Public LB</th>
<th>Private LB</th>
<th>Author</th>
</tr>
</thead>
<tbody>
<tr>
<td>Model03</td>
<td>0.768</td>
<td>0.782</td>
<td><a href="https://www.kaggle.com/analokamus" target="_blank">@analokamus</a></td>
</tr>
</tbody>
</table>
<pre><code>- infinite training with 80k signal templates
- input image size = 360 x 360
- 4096 x 135 x 360 filters
- EfficientNet-b0 head
- BCE loss
- cosine annealing lr for 40 epochs
- H/V flip, time/freq masking, random amplifier
</code></pre>
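<p>The matched filter-like classifier above can be sketched as a convolution layer whose kernels are the sampled signal templates and are kept fixed during training; the exact kernel layout, padding and how the output is reshaped before the EfficientNet-b0 head are my assumptions.</p>
<pre><code>import torch
import torch.nn as nn

class TemplateFilterBank(nn.Module):
    """templates: (n_templates, 135, 360) noise-free signal patches from the signal bank.

    Input: (B, 1, 360, 360) spectrogram. The templates span the full time axis, so the
    convolution only slides over frequency; output: (B, 1, n_templates, 360)."""

    def __init__(self, templates: torch.Tensor):
        super().__init__()
        n_templates, kh, kw = templates.shape
        self.conv = nn.Conv2d(1, n_templates, kernel_size=(kh, kw),
                              padding=(kh // 2, 0), bias=False)
        with torch.no_grad():
            self.conv.weight.copy_(templates.unsqueeze(1))
        self.conv.weight.requires_grad_(False)     # keep the filter bank fixed

    def forward(self, x):                           # x: (B, 1, 360, 360)
        out = self.conv(x)                          # (B, n_templates, 360, 1)
        return out.squeeze(-1).unsqueeze(1)         # fed to a small CNN classification head
</code></pre>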
<h3>Megaconv</h3>
<p>( <a href="https://www.kaggle.com/bakeryproducts" target="_blank">@bakeryproducts</a> ) WIP</p>
<h2>Ensemble</h2>
<h3>Hierarchical stacking</h3>
<p>( <a href="https://www.kaggle.com/analokmaus" target="_blank">@analokmaus</a> <a href="https://www.kaggle.com/drhb" target="_blank">@drhb</a>)<br>
To combine tons of predictions, we used the following approach: </p>
<ol>
<li>Separate predictions into stationary and nonstationary groups; this is because we observed different correlation patterns between predictions for the two noise types.</li>
<li>Categorize predictions into some groups based on the similarity of their training settings and fit a ridge regression model (large alpha regularization, positive coefficients only) to maximize AUC of the validation set inside each group (first stage stacking).</li>
<li>Collect results from first stage stacking (shaped {num sample} x {num group}), and fit another ridge regression (large alpha regularization, positive coefficients only) to maximize AUC of the validation set. </li>
<li>Do 2-3 for both noise types defined in 1</li>
</ol>
<p>The purpose of this two-stage stacking is to deal with multicollinearity and prevent the coefficients from falling to zero. We used two different ensemble profiles, which resulted in public/private score of 0.784/0.780 and 0.783/0.798 respectively.</p>
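<p>A rough sketch of this two-stage stacking with positive ridge regression follows. It is a simplification: the grouping, the alpha value, and fitting the ridge directly to the validation labels (rather than tuning it explicitly for AUC) are illustrative choices, not the exact procedure we used.</p>
<pre><code>import numpy as np
from sklearn.linear_model import Ridge

def two_stage_stack(groups, y, alpha=100.0):
    """groups: dict name -> (n_samples, n_models) validation predictions of similar models.
    y: (n_samples,) validation labels. Returns the blended prediction."""
    def fit(preds):
        # strong regularization + non-negative coefficients, as described above
        model = Ridge(alpha=alpha, positive=True)
        model.fit(preds, y)
        return model

    stage1 = {name: fit(preds) for name, preds in groups.items()}           # first stage per group
    stage1_out = np.column_stack([stage1[name].predict(preds) for name, preds in groups.items()])
    stage2 = fit(stage1_out)                                                # second stage over groups
    return stage2.predict(stage1_out)
</code></pre>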
<h3>Bayesian-like approach</h3>
<p>( <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> )<br>
Consideration of 2048 random 4-fold splits, fitting 8192 linear models with a weak regularization, evaluation of weights of each model as an average over the final weight distribution.<br>
This approch resulted in public/private score of 0.784/0.801.</p>
<p>Our final submission is the rank average of the three ensemble results described above, scoring <strong>0.784 at public LB and 0.801 at private LB</strong>. This is equivalent to <strong>5th place</strong> without the denoising technique (leak) we will mention below.</p>
<h2>Denoising (or leak 😉?)</h2>
<h3>Internal denoising</h3>
<p>( <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> <a href="https://www.kaggle.com/analokamus" target="_blank">@analokamus</a> )<br>
<a href="https://www.kaggle.com/code/iafoss/creative-denoising-part-of-top-3-solution/notebook" target="_blank">This notebook</a> explains our denoising algorithm.</p>
<p>Four days before the end of the competition we realized that the nonstationary-noise test data was generated by taking a single large chunk of real detector noise data, adding some white noise, randomly selecting a subset of time_ids, sorting them, assigning randomly selected time_stamps, selecting the frequency range, and adding the signal. The frequency also appeared to be shifted by an integer number of Hz.</p>
<p>We went through all nonstationary-noise test samples, performed a search between overlapping frequencies as an all-vs-all bin match, found identical bins, and computed abs1 - abs2, which gives a noise-free signal. </p>
<p>One challenge after denoising is that we need to detect the origin of the signal from the matched pairs. We summed the denoised image (abs1 - abs2) along the time axis, then calculated the positive area and the negative area. A big positive area suggests that the signal is from abs1, and a big negative area vice versa. Applying an adequate threshold made it possible to detect the origin of the signal very accurately (<strong>Figure 2</strong>).</p>
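<p>A minimal sketch of this origin-detection step (the decision rule and threshold are assumptions; abs1 and abs2 are the magnitude spectrograms of a matched pair):</p>
<pre><code>import numpy as np

def signal_origin(denoised, margin=0.0):
    """denoised: abs1 - abs2 for a matched pair of spectrograms, shape (n_freq, n_time)."""
    profile = denoised.sum(axis=1)              # sum along the time axis
    pos_area = profile[profile > 0].sum()
    neg_area = -profile[profile < 0].sum()
    if pos_area > neg_area + margin:
        return "signal in sample 1"
    if neg_area > pos_area + margin:
        return "signal in sample 2"
    return "no clear signal (or both)"
</code></pre>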
<p><strong>Figure 2</strong>. (left: denoised image, right: positive/negative area, top: no signal, middle: one signal, bottom: two signals)<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1973217%2Fe444b75a9363396a58752932747aa88a%2F546913595-b7d0f04b5.png?generation=1672917875266319&alt=media"><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1973217%2Fb66a526d15fbf0547a1b60b31a211ff9%2F1209123fd-d62fb035d.png?generation=1672917955287888&alt=media"><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1973217%2Fd209d9808da5996c97f01472895d6128%2F014065fac-73d5e4fc4.png?generation=1672918045232649&alt=media"></p>
<p>We were able to denoise approximately 1100 samples, though in most cases the frequency coverage is partial. Therefore, the denoised data cannot give 100% confidence regarding the labels. We interpolated the prediction and denoised prediction based on the coverage. Overall this denoising improved our best ensemble from 0.784 to 0.807 at public LB, from 0.801 to <strong>0.826</strong> at private LB. </p>
<h3>External denoising</h3>
<p>( <a href="https://www.kaggle.com/bakeryproducts" target="_blank">@bakeryproducts</a> ) WIP</p>
<h2>Things did not work</h2>
<ul>
<li>Noise to Noise denoising</li>
<li>Noise to Void denoising</li>
<li>Diffusion models</li>
<li>Advanced stacking of the models with adding extra features</li>
</ul>
<p>Thank you! Your team's solution, backed by solid domain knowledge, was very elegant and informative. It was an honour to compete with you.</p> |
G2Net Detecting Continuous Gravitational Waves | 4th place solution: dynamic programming and ensembling | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Detecting Continuous Gravitational Waves
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Many thanks to the organizers and to Kaggle for hosting this very interesting competion! We also thank all competitors for the discussions and the exchange of ideas. We learned a lot about CW physics and spectra processing. We are sorry that we did not use a machine learning method. We actually tried but our little experience did not lead to anything satisfying :-)</p>
<p>We quickly found different ways to generate solutions and looked for the best way to ensemble them, but we did not find an obvious way given the ranking-based nature of the ROC AUC metric. Our idea then was to have a single judge function to evaluate all the methods homogeneously. Each method tried to determine the position of the CW curve in the SFT, defined by 4 parameters (F0, F1, alpha, delta). Then, the judge function kept, for each test case, the parameters leading to the best metric score among all results.</p>
<p>Thus, we have accumulated the results of a few methods executed with different numerical parameters. It is interesting to notice that the best contributions were distributed among the various methods. A disadvantage of this judge function was that it required all participating methods to be able to provide the position of the CW curve with all four parameters.</p>
<p><strong>CW parameters generators</strong><br>
A set of algorithms are run on each test case to try to find candidate CW parameters for the potential signal present in the spectrogram. They differ mainly in the processing blocks variants and their parameterization. The main steps were:</p>
<ul>
<li>first use dynamic programming to sketch a rough estimate of the signal curve,</li>
<li>then infer the approximate associated CW parameters (f0, f1, and source direction),</li>
<li>finally adjust the parameters to the signal with an optimisation algorithm (we used the Nelder-Mead optimizer of scipy).</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3860901%2F8afea205d10d3ba1903acbc41a98eb69%2Fimage.png?generation=1672952232224869&alt=media" alt=""></p>
<p><strong>Coarse estimator (dynamic programming)</strong><br>
Normalized L1 and H1 spectrograms are merged: abs(SFT_H1)² + abs(SFT_L1)²<br>
On some variants, non-stationary noise correction is applied to reduce the impact of artifacts.<br>
A frequency densification x2 or x4 is also optionally considered (e.g. sft[1::2, :] = (sft_[:-1, :] - sft_[1:, :]) / np.sqrt(2) on the complex SFTs for the x2 version).</p>
<p>Two main dynamic programming variants were considered, gradually building the best curve with increasing time step:</p>
<p><strong>1) Horizontal binning</strong><br>
In this variant the spectrogram is binned horizontally over each timestep and the curve is allowed to go up or down up to one frequency bin per timestep. Only one inflexion point is allowed.</p>
<p><strong>2) Restricted slope variations</strong><br>
In this variant the signal is interpolated along segments with fixed slopes. The slope range is determined by the maximum expected doppler for the frequency band and by the f1-range considered, and the slope variations between two time steps are clamped. Only one inflexion point is also allowed.</p>
<p><strong>Parameters estimation</strong><br>
A parametric fit of the curve obtained with the previous step gives us an approximate curve in the form (f0 + f1 * t) * (1 + A * cos(phi + w * t))<br>
w corresponds to the mean motion of the Earth in an intertial frame, and A is limited to taking into account its maximal inertial velocity.<br>
Then two possible alternative source positions (on each side of the ecliptic) are derived from these intermediate A and phi parameters.</p>
<p><strong>Parameters optimization</strong><br>
We then try to optimize these sets of (f0, f1, alpha, delta) CW parameters with a Nelder-Mead algorithm.<br>
At this step we go back to the original complex L1 and H1 SFTs. As illustrated in more detail in the next section, the signal is integrated independently for both detectors along the parametric curves, and the minimum value min(L1, H1) is considered for the optimization. A simplified interpolation is performed in the complex SFTs considering only 2 enclosing points in place of the full vector considered in the judge. With a cos i = 1 assumption, an amplitude weighting is applied along the time axis during the signal integration, and optional weights are applied to cope with noise artifacts.<br>
Only the best of the two possible source directions is kept after this step.</p>
<p><strong>Signal strength evaluation ("judge")</strong><br>
The judge evaluation works on the raw complex spectra H1 and L1. We wanted to determine a SNR on both indenpendently. Note that what we call SNR is probably quite different of the definition one can find in PyFstat.</p>
<p>Here are the main steps of the judge algorithm :</p>
<p>For every time slot (spectrum column):</p>
<ul>
<li>determine the expected frequency using the 4 parameters (F0, F1, alpha, delta),</li>
<li>compute a weighted sum of the pixels around (weight and phase are calculated using the window FFT)</li>
</ul>
<p>Sum the square modulus obtained for each time slot (weighted by the inverse variance of the columns, and by the amplitude modulation expected for a cos i = 1 signal).</p>
<p>We determined empirically (on the public score) that the best metric to keep was min(SNR_H1, SNR_L1), while the sum SNR_H1 + SNR_L1 performed better on the train dataset. Also, 2 * min(SNR_H1, SNR_L1) + max(SNR_H1, SNR_L1) seems to give a significant improvement on the private dataset (our best submission so far scores 0.821 on private, but we did not keep it as it was scoring 0.807 on the public score).</p>
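<p>A simplified sketch of the judge described above: it integrates the power along the candidate curve with inverse-variance column weights and the cos i = 1 amplitude modulation, using only the nearest frequency bin instead of the window-FFT-weighted interpolation. Variable names are illustrative.</p>
<pre><code>import numpy as np

def detector_snr(power, track_idx, amp_weight):
    """power: (n_freq, n_time) squared modulus of one detector's SFTs;
    track_idx: (n_time,) expected frequency bin per time slot (from F0, F1, alpha, delta);
    amp_weight: (n_time,) amplitude modulation expected for a cos i = 1 signal."""
    col_var = power.var(axis=0)                               # per-column noise level
    along_track = power[track_idx, np.arange(power.shape[1])]
    return np.sum(amp_weight * along_track / col_var)

def judge_metric(snr_h1, snr_l1):
    return min(snr_h1, snr_l1)    # 2*min + max was the promising alternative mentioned above
</code></pre>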
<p>We noticed that the scores of the high-frequency spectra were significantly lower on average. Our search method probably performed worse on them because the daily Doppler modulation implies a frequency variation of about one pixel. Applying a score correction based on a linear regression gained us between 0.002 and 0.005 points on submissions.</p>
<p>Thanks again to everyone for this journey. We are impressed by the diversity of the various methods used to tackle this problem :-)</p> |
G2Net Detecting Continuous Gravitational Waves | 5th place solution: Stack-sliding and Differential Evolution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Detecting Continuous Gravitational Waves
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Great, happy to hear that!</p>
<p>Sorry for late reply, we still haven't prepared the repository. We will post it here for sure. I have a twitter account: <a href="https://twitter.com/ITimiryasov" target="_blank">https://twitter.com/ITimiryasov</a>, maybe I need to start twitting :) </p>
<p>Thank you!</p>
<p>Yes, we are planning to open source it and maybe writing a more detailed post. It will take some time to cleanup the code though.</p>
<p>We want to thank the organizers and the other participants of this great event! This is our first time on Kaggle and we are happy to win the Gold medal.</p>
<p>We are theoretical particle physicists working mainly on Heavy Neutral Leptons (hence the team name with the same acronym) with no background in Gravitational Wave physics. The initial plan was to improve our ML skills, but we turned eventually to a physics-based approach.</p>
<h2>The idea of the Method</h2>
<p>It is easy to find a continuous signal – it is just a peak in a Fourier transform.<br>
However, due to the Doppler modulation and spin-down of the neutron star, the signal is spread over multiple frequency bins. <br>
Our method (very similar to Jun Koda's solution) aims to streamline the modulation curve, as in the figure below </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F12396317%2F5266fae1a41a27c09a6a22684e2b575c%2FStackSlide.png?generation=1672836415306719&alt=media" alt=""><br>
(figure from this <a href="https://arxiv.org/abs/2206.06447" target="_blank">arxiv.org/abs/2206.06447</a>)</p>
<p>The modulation pattern depends on the position of the source (alpha, delta) and the time derivative of the frequency F1, see the <a href="https://github.com/PyFstat/PyFstat/blob/master/examples/tutorials/1_generating_signals.ipynb" target="_blank">pyFstat tutorial</a>.</p>
<p>In fact, the method is not new and has been used by the GW community under the name of StackSlide [1]. Once the individual SFTs are shifted so that the signal is located in one frequency bin, we simply sum their powers (absolute values squared). If the assumed modulation pattern mismatches the actual one slightly, the signal is spread across several bins, which drastically reduces the sensitivity. Therefore, one has to scan over a very fine grid in the parameter space. The method is insensitive to any gaps in the timestamps. It is also rather robust against non-stationary noise, but very sensitive to instrumental lines. A minimal sketch of the shift-and-sum step is given below.</p>
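<p>This sketch assumes the integer frequency-bin shifts predicted by a given (alpha, delta, f1) hypothesis have already been computed for every time column:</p>
<pre><code>import numpy as np

def stack_slide_power(power, shifts):
    """power: (n_freq, n_time) normalized |SFT|^2; shifts: (n_time,) integer bin shifts.

    Rolls every column so that the candidate signal lands in a single frequency bin,
    then sums the powers over time and returns the best bin's total power.
    """
    n_time = power.shape[1]
    aligned = np.empty_like(power)
    for t in range(n_time):
        aligned[:, t] = np.roll(power[:, t], -shifts[t])
    summed = aligned.sum(axis=1)          # incoherent sum over time
    return summed.max()                   # peak over frequency bins
</code></pre>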
<h2>Implementation</h2>
<p>For this challenge, we have implemented the method from scratch, first in python and then in Julia (the optimized Julia code gives a ~240x speed-up compared to a naive python implementation).</p>
<p>The processing followed these steps:</p>
<ul>
<li>Normalize the data. |SFT|² / std(Re(SFT)) worked well. We tweaked this a bit when a strong instrumental line was detected by the algorithm, but it didn’t seem to affect the performance. Normalized this way, the Gaussian noise will follow a chi-squared distribution and the signal will follow a non-central chi-squared distribution. </li>
<li>For every sample, scan over the parameter space (alpha, delta, f1) and find the maximum power. Scanning over alpha and delta with sufficient resolution takes ~20 s per sample, so additionally scanning over f1 was too time-consuming for us. Instead we used <a href="https://github.com/robertfeldt/BlackBoxOptim.jl" target="_blank">differential</a> <a href="https://en.wikipedia.org/wiki/Differential_evolution" target="_blank">evolution</a> with the objective -max(Power_L1 + Power_H1). Summing the powers from the two detectors greatly improved our LB (from 0.747 to 0.804).</li>
<li>During the scan, our algorithm analyzed the data to isolate potential glitches. Improving this algorithm slightly improved our score.</li>
<li>Final predictions were made by simply applying the logistic function to the max Power. AUC score is invariant under reparametrization, so the parameters of the logistic function do not matter.</li>
</ul>
<p>Processing the test set takes around 10 hours on a 5-year-old Linux desktop machine (on 8 cores).</p>
<h2>What could have been improved</h2>
<ul>
<li>Amplitude modulation. The signal intensity depends on the position of the detectors relative to the source, and on an extra phase, in a complicated way. We wanted to sum the stacks with weights proportional to that amplitude modulation but didn’t have time to implement that properly. If we understand correctly, Jun Koda used amplitude modulation. Simpler filters (daily/twice-daily modulation \( \propto \exp(2πi t/T) \)) did not perform well.</li>
<li>Maybe we concentrated too much on isolating glitches, which make up at most 2% of the test set.</li>
<li>Optimal filtering. We noticed the signal leakage to the nearby frequency bins due to the finite time of short SFTs. Mitigating it with optimal filtering is a great idea, which put Jun Koda in 1st place. We initially tried to filter the SFTs when investigating the use of CNNs; however, we did not have time to revive this effort for stack-sliding.</li>
</ul>
<h2>Earlier failed attempts</h2>
<p>Like a good Kaggle beginner, we initially jumped at the most high-tech solution possible: we wanted to use a Transformer applied to time series. We realized that there is an existing method to search for CWs that generates sequential data: Viterbi tracks [2].</p>
<p>After this attempt failed, we then decided to temper our expectations and go for a known and tested method: convolutional neural networks (in particular we searched for noise-resilient CNNs). Here we encountered a number of problems. First, the timestamps are not nicely aligned on a grid, and the SFTs contain a large number of gaps and overlaps. The number of timesteps is also too large to feed into a typical CNN architecture. We realized that we would need to resize our input, and we tried to find a clever way to do so, that didn’t penalize our sensitivity too much (having worked previously on resonant particle searches, we were all well-aware of the importance of maintaining the best possible resolution). To this end we tried a number of filters to match the daily amplitude and frequency modulations before max-pooling the SFTs for each day.</p>
<p>We had limited success here: we managed to make some hidden CWs much more visible to the human eye, but when we tried to use this method to produce the CNN input, we encountered a much bigger problem: the test set was significantly out-of-distribution compared to the training set, with a number of test samples containing strong glitches (see for instance <a href="https://www.kaggle.com/competitions/g2net-detecting-continuous-gravitational-waves/discussion/364854#2026621" target="_blank">these</a> <a href="https://www.kaggle.com/competitions/g2net-detecting-continuous-gravitational-waves/discussion/364854#2022856" target="_blank">posts</a> and <a href="https://www.kaggle.com/code/vslaykovsky/g2net-winning-strategy-with-external-data" target="_blank">this notebook</a>). We realized that we would need to generate our own training set if we wanted to apply any machine learning method. This gave us the impetus to look at algorithmic methods that do not require a training set.</p>
<p>We also tried more elaborate methods of normalizing data, like the rolling median normalization, but it didn’t improve our results and seemingly increased the look-elsewhere effect. </p>
<h2>References</h2>
<ol>
<li>LIGO Scientific Collaboration, <em>All-sky search for periodic gravitational waves in LIGO S4 data,</em> Phys.Rev.D 77 (2008) 022001, Phys.Rev.D 80 (2009) 129904 (erratum), <a href="https://arxiv.org/abs/0708.3818" target="_blank">arXiv:0708.3818</a></li>
<li>Joe Bayley, Graham Woan, Chris Messenger, <em>Generalized application of the Viterbi algorithm to searches for continuous gravitational-wave signals,</em> Phys.Rev.D 100 (2019) 2, 023006, <a href="https://arxiv.org/abs/1903.12614" target="_blank">arXiv:1903.12614</a></li>
</ol> |
G2Net Detecting Continuous Gravitational Waves | 6th place solution (Simulated Annealing Approach) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Detecting Continuous Gravitational Waves
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to the host and congratulations to the winners.<br>
I'm so glad to win my first solo gold medal in this competition!<br>
My solution is based on <a href="https://en.wikipedia.org/wiki/Simulated_annealing" target="_blank">Simulated annealing</a> (SA), a technique often used in optimization problems. It is NOT based on machine learning/deep learning/data generation.</p>
<h3>Preprocess</h3>
<p>The following process was performed for each of H1 and L1 and added together to produce a 360*256 image.</p>
<ul>
<li><p>Calculate the square of the amplitude</p></li>
<li><p>Removal of horizontal line noise (only real noise)</p>
<ul>
<li>Calculate the sum for each row to create a list of 360 elements, then take the difference of this list. If a large positive value and a large negative value appear within a short interval of each other, it is assumed that there is horizontal line noise, and the values of those rows are replaced by the average of each column. This operation is repeated until it can no longer be done.</li>
<li>This removes almost all of the real noise and improves the score by about 0.005.<ul>
<li>Some example images before and after horizontal line noise removal are as follows.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3026173%2Fdd6cf77558775aabeaf6f8692627cae9%2F1-2.png?generation=1672801107095404&alt=media" alt=""><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3026173%2Fc5a3f0d84fd30bdfc21ad2f112c7adcc%2F3-4.png?generation=1672801151754877&alt=media" alt=""></li></ul></li></ul></li>
<li><p>Removal of vertical stripe noise</p>
<ul>
<li>Subtract the mean for each column, divide by std^2, and rescale so that the sum of the stds of all columns is a constant (see the sketch after this list).</li>
<li>By dividing by the square of std instead of std, the columns with larger original variance will have smaller final variance. The idea is to reduce the contribution of columns with large original variance because they cannot be relied upon. The score increases slightly by this method.</li></ul></li>
<li><p>Filling the time gap</p>
<ul>
<li>Calculate the difference of the timestamps as d and repeat inserting zero-filled columns round(d / 1800 - 1) times.</li></ul></li>
<li><p>Average in the time direction and reduce columns size to 256</p></li>
</ul>
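<p>A rough sketch of the vertical-stripe normalization and the time-gap filling described above. The write-up does not specify the constant used when rescaling the column stds, so this sketch simply matches the original sum of stds, and the function names are illustrative.</p>
<pre><code>import numpy as np

def normalize_columns(img):
    """Per-column normalization with a 1/std^2 weighting (noisier columns are down-weighted)."""
    mean = img.mean(axis=0, keepdims=True)
    std = img.std(axis=0, keepdims=True) + 1e-12
    out = (img - mean) / std ** 2
    out *= std.sum() / (out.std(axis=0).sum() + 1e-12)   # keep the sum of column stds constant
    return out

def fill_time_gaps(img, timestamps):
    """Insert zero-filled columns for gaps between consecutive 1800 s SFTs."""
    cols = [img[:, 0:1]]
    for t in range(1, img.shape[1]):
        gap = int(round((timestamps[t] - timestamps[t - 1]) / 1800.0 - 1))
        if gap > 0:
            cols.append(np.zeros((img.shape[0], gap)))
        cols.append(img[:, t:t + 1])
    return np.concatenate(cols, axis=1)
</code></pre>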
<h3>Wave Detection with SA</h3>
<p>I will explain the essential part of my solution: Simulated annealing (SA).<br>
First, assume that the signal is perfectly sinusoidal and follows the following equation</p>
<p>f(x) = Asin(vx+θ) + h</p>
<p>where A is the amplitude, v is the frequency, θ is the phase, and h is the position in the frequency direction.<br>
The search range for each parameter is set as follows based on the experimental results.</p>
<ul>
<li>A: [-100, 100].</li>
<li>v: fixed at 0.31 (fixing the frequency slightly improved the score)</li>
<li>θ: any real number</li>
<li>h: any real number</li>
</ul>
<p>Then we want to search for the parameters that give the clearest waveform.<br>
To do this, I used SA with the following procedure.</p>
<ul>
<li>The following operation is repeated tens of thousands of times while decreasing the temperature (see the sketch after this list).<ul>
<li>Change the above parameters slightly and calculate the sum of the squares of the amplitudes along the waveform as the score (with a penalty if it refers outside the image). If the score is greater than the previous score, accept the change. Even if the score is smaller than the previous score, accept the change probabilistically, depending on the score difference and the annealing temperature. Otherwise reject the change. </li></ul></li>
<li>In addition, to avoid getting stuck at a local optimum, the whole process above is repeated several hundred times, and the kth percentile value (k chosen from around 90~100) is used as the final score.</li>
<li>It takes several hours to a day (depending on the number of iterations) on my CPU (Core i9-9900K) to calculate the scores of all the test data.<ul>
<li>Postprocessing and submitting this can achieve an LB score above 0.79 with a few hours of computation.</li></ul></li>
</ul>
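<p>A minimal sketch of one SA run over (A, θ, h) with v fixed at 0.31, as referenced in the list above; the step sizes, cooling schedule and scoring details are placeholders rather than the exact settings used.</p>
<pre><code>import numpy as np

def sa_search(img, n_iter=30000, v=0.31, t_start=1.0, t_end=1e-3):
    """img: (360, 256) preprocessed spectrogram (already holds squared amplitudes)."""
    n_freq, n_time = img.shape
    xs = np.arange(n_time)

    def score(A, theta, h):
        rows = np.round(A * np.sin(v * xs + theta) + h).astype(int)
        if rows.min() < 0 or rows.max() >= n_freq:
            return -1e9                          # penalty for leaving the image
        return img[rows, xs].sum()               # sum of squared amplitudes along the curve

    A, theta, h = 0.0, 0.0, n_freq / 2
    cur = best = score(A, theta, h)
    for i in range(n_iter):
        temp = t_start * (t_end / t_start) ** (i / n_iter)       # geometric cooling
        cand = (A + np.random.normal(0, 1.0),
                theta + np.random.normal(0, 0.05),
                h + np.random.normal(0, 1.0))
        new = score(*cand)
        if new > cur or np.random.rand() < np.exp((new - cur) / temp):
            A, theta, h = cand
            cur = new
            best = max(best, cur)
    return best
</code></pre>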
<p>Some example images of detected waveforms for some difficult test cases are as follows.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3026173%2F664a106bc0a7a3a394bc78c55a1d80f1%2F5.png?generation=1672801516897133&alt=media" alt=""><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3026173%2Feaa5412252d039795d351203575fd2d1%2F6.png?generation=1672801527565415&alt=media" alt=""><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3026173%2F344d212d77257fd883c702ea2bfd7780%2F7.png?generation=1672801537224085&alt=media" alt=""></p>
<h3>Postprocess</h3>
<p>I performed a number of SAs with various numbers of iterations, parameter search ranges, wave thickness, etc., and calculated the ensemble score by weighted averaging.<br>
I gave up on calculating the CV score, and adjusted the weights so that Public LB would be higher.<br>
In addition, the real noise data tended to score slightly higher than the generated noise data (due to its tendency to overfit against noise). Therefore, I first converted the final score to a rank, and then divided the rank value by about 1.1 for real noise data only, improving the score by several points.</p> |
G2Net Detecting Continuous Gravitational Waves | 8th place solution: matched filter and CNN | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Detecting Continuous Gravitational Waves
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you.</p>
<p>Initially, we tried a fully-coherent search. However, as you pointed out, fully-coherent search requires near-perfect (e.g. 99.99999%) parameter estimation to achieve a "spiked" result. Therefore, we decided to use a semi-coherent search with a large nsegs.</p>
<p>You can find a comparison of semi and fully coherent search results on page 4 of the following slide by Rodrigo Tenorio, host of this competition:<br>
<a href="https://www.uv.es/igwm2021/slides/Rodrigo_Tenorio.pdf" target="_blank">https://www.uv.es/igwm2021/slides/Rodrigo_Tenorio.pdf</a></p>
<p>We also tried using the MCMCSemiCoherentSearch method, but it was not as competitive as a normal semi-coherent search.</p>
<p>Thank you! Congratulations to you too.</p>
<p>To analyze the test set, we spent a maximum of 10 hours per sample. If the result was greater than 4600, we stopped the search for that sample.<br>
It took around two weeks to run the entire test set in real time using a 96 vCPU machine.</p>
<p>Congratulations to the winners, and I would like to express my gratitude to the organizers and my teammate.</p>
<h1>Summary</h1>
<ul>
<li>Matched filter using PyFstat</li>
<li>CNN with pseudo-label and synthesized data</li>
<li>Ensemble MF and CNN results.</li>
</ul>
<h1>Matched filter</h1>
<p>Matched filter (MF) is a highly competitive method for searching for continuous waves. PyFstat provides an MF module, which we used for this competition. However, PyFstat (and its underlying LALpulsar) requires a specific file structure (.sft files) that is not compatible with hdf5 or ndarray, so we converted the provided data to .sft files. <br>
When provided with four values (f0, f1, alpha, and delta), the matched filter calculates a "twoF" value, which is a statistical measure of the "likelihood" of the target wave's existence.</p>
<ul>
<li>We used PyFstat's <a href="https://pyfstat.readthedocs.io/en/latest/pyfstat.html#pyfstat.core.SemiCoherentSearch" target="_blank">SemiCoherentSearch</a> with "nsegs=1000". <br>
Here, we used a relatively large "nsegs" value to smooth the shape of the "twoF" function w.r.t. the parameters f0/f1/alpha/delta. This trick helps find the target waves in far fewer grid-search trials.</li>
<li>We conducted a grid search for alpha and delta.</li>
<li>We used Optuna to search for f0 and f1 in order to maximize the twoF value (see the sketch after this list).</li>
<li>We performed around 600 Optuna explorations per sample.</li>
<li>You can find more information about our implementation here: <br>
<a href="https://www.kaggle.com/code/iiyamaiiyama/g2net-pyfstat-matched-filter" target="_blank">https://www.kaggle.com/code/iiyamaiiyama/g2net-pyfstat-matched-filter</a></li>
</ul>
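<p>A rough sketch of the Optuna search referenced above. Here <code>compute_twoF</code> is a hypothetical wrapper around PyFstat's SemiCoherentSearch (nsegs=1000); the f1 range and the way the ~600 trials are split across the (alpha, delta) grid are my assumptions (see the linked notebook for the real implementation).</p>
<pre><code>import optuna

def search_sample(compute_twoF, f0_range, grid_points, total_trials=600):
    """grid_points: list of (alpha, delta) pairs; returns the best twoF found."""
    best = 0.0
    for alpha, delta in grid_points:
        def objective(trial):
            f0 = trial.suggest_float("f0", f0_range[0], f0_range[1])
            f1 = trial.suggest_float("f1", -1e-8, 1e-8)   # placeholder spin-down range
            return compute_twoF(f0, f1, alpha, delta)

        study = optuna.create_study(direction="maximize")
        study.optimize(objective, n_trials=max(1, total_trials // len(grid_points)))
        best = max(best, study.best_value)
    return best
</code></pre>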
<h2>CV result</h2>
<p>The matched filter histograms for the training data (600 samples) are shown below. The right image is an enlarged version of the left image. As you can see, if the MF result (twoF) is greater than 4600, the precision is 100%. Therefore we set 4600 as the threshold value.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2304617%2Fe87236cd94cbd61d554ccf9fca31a494%2Fsol1.png?generation=1672925392581375&alt=media" alt=""></p>
<p>The AUC of the 600 samples is 0.8406. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2304617%2Ff4b47e343fa5b288fc78ae0eaa4d89a2%2Fsol2.png?generation=1672925412196942&alt=media" alt=""></p>
<h2>LB result</h2>
<p>If we submitted only the MF results, the private/public leaderboard score was 0.764/0.760, which is not a very competitive result. We believe it may be due to the non-stationary noise. Therefore, we decided to ensemble MF and CNN results to improve our score.</p>
<h1>CNN</h1>
<h2>dataset</h2>
<p>The training data only contains 600 samples. We had to deal with it.</p>
<h3>Stationary noise</h3>
<ul>
<li>Pseudo-label using matched filter<br>
We selected samples with MF results greater than 4600 from the test set. Around 1700 samples.</li>
<li>Synthesized data<br>
We synthesized new data from train1 and train2 by using the following formula: <br>
<code>new_data = train1 * alpha + train2 * beta</code> <br>
(alpha and beta satisfy the condition <code>alpha^2 + beta^2 == 1</code>)</li>
</ul>
<h3>Non-stationary noise</h3>
<p>We used the following notebook as a reference for creating our own new data. <br>
<a href="https://www.kaggle.com/code/vslaykovsky/g2net-pytorch-generated-realistic-noise" target="_blank">https://www.kaggle.com/code/vslaykovsky/g2net-pytorch-generated-realistic-noise</a> <br>
After creating the new data, we augmented the data by synthesizing the two data as well as the stationary noise data.</p>
<h2>Model</h2>
<p>We had two separate models: one for stationary noise and one for non-stationary noise. The models themselves were ordinary CNNs. We used data augmentation techniques such as h/v flip, mask and roll.</p>
<h1>Ensemble</h1>
<p>Finally, we ensembled the MF and CNN results. For samples with an MF result greater than 4600, we assigned a score of 0.9-1.0. For all other samples, we used the CNN scores mapped to 0.0-0.9. By prioritizing the samples with 100% precision and placing the less confident CNN results after them, we can maximize the AUC score.</p>
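<p>A small sketch of this score-merging step; the exact mapping inside each band is an assumption, while the 4600 threshold and the 0.9-1.0 / 0.0-0.9 bands come from the description above.</p>
<pre><code>import numpy as np
from scipy.stats import rankdata

def merge_scores(mf_scores, cnn_scores, threshold=4600.0):
    """Map matched-filter hits (twoF > threshold) into [0.9, 1.0] and all other
    samples into [0.0, 0.9] via their CNN score, preserving the order within each group."""
    mf_scores = np.asarray(mf_scores, dtype=float)
    cnn_scores = np.asarray(cnn_scores, dtype=float)
    out = np.empty_like(cnn_scores)

    hit = mf_scores > threshold
    if hit.any():
        out[hit] = 0.9 + 0.1 * rankdata(mf_scores[hit]) / (hit.sum() + 1)
    if (~hit).any():
        out[~hit] = 0.9 * rankdata(cnn_scores[~hit]) / ((~hit).sum() + 1)
    return out
</code></pre>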
G2Net Detecting Continuous Gravitational Waves | 9th Place Solution; Simple CNN Approach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Detecting Continuous Gravitational Waves
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>please refer to<br>
<a href="https://github.com/qubvel/segmentation_models.pytorch/blob/master/segmentation_models_pytorch/decoders/unetplusplus/model.py" target="_blank">https://github.com/qubvel/segmentation_models.pytorch/blob/master/segmentation_models_pytorch/decoders/unetplusplus/model.py</a></p>
<p>Congrats to all prize and medal winners!<br>
Because the top teams got very high scores, I look forward to learning what the magic was! Anyway, I briefly introduce my simple CNN-based solution here.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F745525%2F9a2c42a8bcb46207e2bfb24013a5753f%2Fg2net.png?generation=1672792259750964&alt=media" alt=""></p>
<h2>Training data</h2>
<p>Generating training data is very important in this competition because the given training data consists only of simulated (stationary Gaussian) noise while the test data contains real noise. In order to reflect the test data, I generated two types of noise: stationary noise and time-varying noise.</p>
<ul>
<li>Stationary noise is simply drawn from the Gaussian distribution, whose mean and std are estimated from training data.</li>
<li>Time-varying noise is also drawn from the Gaussian distribution, but its mean and std vary with time. These parameters are calculated from test images.</li>
<li>In addition, to simulate real noise, random walk and line noise are added to the time-varying noise.</li>
<li>Then, signal is inserted with a probability of 0.5.</li>
<li>Finally, by deleting data at multiple timestamps, timestamp gaps in train and test data are reproduced (for both stationary noise and time-varying noise).</li>
</ul>
<p>The key point here is that all training data is generated online except signals. Only signals (without noise) are created with pyfstat beforehand.</p>
<h2>Model</h2>
<p>I trained a UNet model to predict signal positions in time-frequency domain in addition to predicting signal existence for better supervision.</p>
<p>I created 20k signals beforehand, and training data for one epoch consists of 20k noise+signal and 20k noise. The model was trained for 180 epochs (it took about 18 hours). Thus, the model saw 40k * 180 examples.</p>
<p>Thx!</p>
<ol>
<li>UnetPlusPlus with EfficientNetV2-s backbone. You can easily create this segmentation model with classification head as:</li>
</ol>
<pre><code>from segmentation_models_pytorch import UnetPlusPlus
UnetPlusPlus("tu-efficientnetv2_s", classes=1, in_channels=2, aux_params=dict(classes=1))
</code></pre>
<p>Best single classification model: private LB=0.786 vs. best single segmentation model: private LB=0.789.</p>
<ol>
<li>If I trained the model using only the dice loss, some post-processing would be needed to calculate a score from the segmentation results (or training another model, as you mentioned). Of course, this can be done, but I did not try it. For the loss function, I just added the two losses.</li>
</ol> |
G2Net Gravitational Wave Detection | 10th Place Solution: CWT->1D Conv + CNN | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all I want to thank the host and Kaggle for organizing this amazing competition. We learned a lot in these two months. </p>
<h1>Brief summary of my part</h1>
<h3>Pipeline</h3>
<p>CWT/CQT is a kind of 1D Conv in nature. But it is only one layer and the kernel is too large. So making it trainable directly doesn't work. I used a little trick as below.</p>
<ul>
<li>CWT + Resnet34/EfNet, 488x512, 1/5 fold, train 5 epochs -> valid_auc: 0.87x</li>
<li>Replace CWT with a multiple layers 1D Conv, the same size as CWT, train 20 epochs -> valid_auc: 0.880x</li>
<li>Fine-tune the model with full data, 5 epochs<br>
Single model can get 0.8790/0.8808 (private/public)</li>
</ul>
<p>Using different parameter sets (bandpass, time shift, random channel off, etc.), I trained 10 models as above. Ensembling them all can get 0.8803/0.8818 (private/public).</p>
<p>Then, ensembling with my other models, the result is 0.8816/0.8828 (private/public). Unfortunately we didn't choose it as our final score.</p>
<h3>Augmentation</h3>
<ul>
<li>Trim the original wave to (3, 3904), Random shift +- 65</li>
<li>Random shift each channel +- 5</li>
<li>Random turn off one channel</li>
<li>Random switch channel of two waves (target=0 only)</li>
</ul>
<h3>Others</h3>
<ul>
<li>Using Tensorflow and TPU</li>
<li>CWT/CQT conversion is on-the-fly</li>
</ul>
<p>Yes, I set each channel to 0 with a probability of 0.1. I tested it at a very early stage, but with only 1/8 of the data. It helped prevent my model from overfitting a little.<br>
BTW, it doesn't work for the 1D CNN model. </p>
G2Net Gravitational Wave Detection | 11th place brief solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, I'm a student studying Computer Science/Statistics at university, and I'm new to Kaggle.✋<br>
I don't really know much about GWs, signal processing, or other deep learning techniques (augmentation, stacking, etc.), so I mainly stuck to modeling (1D CNNs).</p>
<p><strong>preprocessing:</strong><br>
1) bpf 30-500<br>
2) bpf 25-1020</p>
<p><strong>augmentation:</strong><br>
mixup</p>
<p><strong>models:</strong><br>
single 1DCNN model v1 -> cv 8800 lb 8800<br>
single 1DCNN model v2 -> cv 877x lb ?(didn't try)</p>
<p>v1 + v2 average ensemble -> cv 882x public lb 882x (!)<br>
v1 + v2 with preprocessing1,2 -> cv 883x public lb 8830<br>
using whole dataset -> public lb8833<br>
stacking with a single cqt model in public notebook(cv875x, thanks to <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a>)<br>
-> public lb8836</p>
<p>For the 1D CNN models, I used residual connections and a mix of dilated and standard convolutions, with ~24 layers and ~2M parameters.<br>
I tried to figure out why the v1+v2 ensemble gives such a large improvement and to build a single model with both models' advantages, but I couldn't due to lack of time.</p>
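<p>For readers who want a concrete picture, here is a generic PyTorch sketch of a residual, dilated 1D CNN in the spirit of the model described above (widths, depth and the dilation schedule are illustrative, not the author's exact configuration):</p>
<pre><code>import torch
from torch import nn

class DilatedResBlock(nn.Module):
    # One residual unit: two dilated Conv1d layers with a skip connection.
    def __init__(self, channels, dilation):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=dilation, dilation=dilation),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=dilation, dilation=dilation),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.net(x))

class Wave1DCNN(nn.Module):
    def __init__(self, in_ch=3, width=64, n_blocks=12):
        super().__init__()
        self.stem = nn.Conv1d(in_ch, width, kernel_size=7, padding=3)
        self.blocks = nn.Sequential(*[DilatedResBlock(width, 2 ** (i % 6)) for i in range(n_blocks)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(width, 1))

    def forward(self, x):          # x: (batch, 3, 4096) band-passed waves
        return self.head(self.blocks(self.stem(x)))
</code></pre>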
<p>so glad for learning so many things/achieving a gold medal in my very first DL competition!<br>
And special thanks to <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a> and <a href="https://www.kaggle.com/miklgr500" target="_blank">@miklgr500</a> for providing such a great notebook to start with:)</p> |
G2Net Gravitational Wave Detection | 12th place solution [SiN Nakaism924] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I don't know the exact difference in scores from regular mixup because I used that trick from the beginning, but using mixup prevented overfitting during long training.<br>
Also, considering the possibility that this trick was negatively affecting the training, I reduced the frequency of its use from 50% to 15%, and the cv score decreased by about 0.0002.</p>
<p>First of all, I would like to thank my teammates( <a href="https://www.kaggle.com/naoism" target="_blank">@naoism</a>, <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a>, <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a>, <a href="https://www.kaggle.com/sinpcw" target="_blank">@sinpcw</a> ) for competing with me. I would also like to thank kaggle and EGO for organizing an interesting competition.<br>
I explain my approach. My teammates will add their own approaches in the comments!</p>
<ul>
<li>Y.Nakama part ( <a href="https://www.kaggle.com/c/g2net-gravitational-wave-detection/discussion/275356#1529297" target="_blank">https://www.kaggle.com/c/g2net-gravitational-wave-detection/discussion/275356#1529297</a> )</li>
<li>SiNpcw part ( <a href="https://www.kaggle.com/c/g2net-gravitational-wave-detection/discussion/275356#1529331" target="_blank">https://www.kaggle.com/c/g2net-gravitational-wave-detection/discussion/275356#1529331</a> )</li>
<li>Naoism part ( <a href="https://www.kaggle.com/c/g2net-gravitational-wave-detection/discussion/275356#1529412" target="_blank">https://www.kaggle.com/c/g2net-gravitational-wave-detection/discussion/275356#1529412</a> )</li>
<li>Hidehisa part ( <a href="https://www.kaggle.com/c/g2net-gravitational-wave-detection/discussion/275356#1529639" target="_blank">https://www.kaggle.com/c/g2net-gravitational-wave-detection/discussion/275356#1529639</a> )</li>
</ul>
<h2>hirune part intro</h2>
<p>I joined this competition after the SETI competition was over. This task is very similar to the SETI competition task, so I reused most of the SETI code.</p>
<h3>preprocessing</h3>
<ol>
<li>Divide all waveforms by 4.6152116213830774e-20 (the max(abs) over the entire train and test data).</li>
<li>Use nnAudio to run the CQT. Randomly select one of flattop, blackmanharris, or nuttall as the window; this can also be used for TTA. Finally, I added CWT as well.</li>
<li>The spectrograms are combined in the frequency direction and input to the model as a single-channel image.</li>
<li>Resize to 384x512 before feeding the model, and normalize using the mean/std of the spectrograms over the entire dataset (a sketch of this preprocessing follows below).</li>
</ol>
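<p>A rough sketch of this preprocessing path (only the 4.615e-20 constant and the three window choices come from the write-up; the CQT parameters, the resize call and the helper names are my own placeholders):</p>
<pre><code>import numpy as np
import torch
import torch.nn.functional as F
from nnAudio.Spectrogram import CQT1992v2

MAX_ABS = 4.6152116213830774e-20                      # max(abs) over train + test
WINDOWS = ["flattop", "blackmanharris", "nuttall"]

def make_image(waves, mean, std):                     # waves: (3, 4096) numpy array
    waves = torch.from_numpy(waves / MAX_ABS).float()
    # for clarity the CQT is built per call; in practice create one per window up front
    cqt = CQT1992v2(sr=2048, fmin=20, fmax=1024, hop_length=8,
                    window=str(np.random.choice(WINDOWS)))   # random window, reused as TTA
    img = cqt(waves)                                  # (3, n_bins, n_frames)
    img = torch.cat([img[0], img[1], img[2]], dim=0)  # stack sites along the frequency axis
    img = F.interpolate(img[None, None], size=(384, 512), mode="bilinear")[0]
    return (img - mean) / std                         # dataset-wide mean / std
</code></pre>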
<h3>Augmentation</h3>
<ul>
<li>Before converting to a spectrogram, mixup as follows.</li>
</ul>
<pre><code>x = x1 + x2
y = y1 + y2 -(y1*y2)
</code></pre>
<ul>
<li>Randomly roll shift in time direction</li>
</ul>
<h3>train setup</h3>
<ul>
<li>model: timm tf_efficientnet_b4_ap</li>
<li>optimizer: Adam</li>
<li>Lr: 0.001</li>
<li>scheduler: CosineAnnealingLR</li>
</ul>
<h3>pseudo label</h3>
<p>Using pseudo-labels showed some improvement, but not a lot.</p>
<h3>stacking</h3>
<p>Using the other team members' models and finally stacking 137 models with an NN or XGB, there was a significant score increase. The other models include 1D CNNs, Swin Transformer, and EfficientNet B5-B8.<br>
Team members will explain these details later.</p> |
G2Net Gravitational Wave Detection | 13th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I would like to thank my team members <a href="https://www.kaggle.com/sakshamaggarwal" target="_blank">@sakshamaggarwal</a> <a href="https://www.kaggle.com/darkravager" target="_blank">@darkravager</a> <a href="https://www.kaggle.com/callmeb" target="_blank">@callmeb</a> and <a href="https://www.kaggle.com/mrigendraagrawal" target="_blank">@mrigendraagrawal</a>, organizers and the Kaggle community for the amazing experience we had during this competition.</p>
<h1>Preprocessing:</h1>
<p>thanks to <a href="https://www.kaggle.com/callmeb" target="_blank">@callmeb</a></p>
<ul>
<li><strong>Mean PSD</strong>: We calculated the mean PSD as simple running mean and store it for later normalization:</li>
</ul>
<pre><code>import numpy as np
import torch
from tqdm import tqdm

# Running mean of |FFT| over all noise-only (target == 0) training samples,
# stored as DET and used later to normalize/whiten each waveform.
train0 = train[train.target==0].reset_index(drop=True)
DET = None
for idx, (id, target, path) in tqdm(train0.iterrows(), total=len(train0)):
    ts = np.load(path)
    ts = ts_window(ts)
    ts = ts_whiten(ts)
    ts = torch.tensor(ts)
    fs = torch.fft.fft(ts)
    if DET is None:
        DET = fs.abs()
    else:
        DET = (fs.abs() + idx * DET) / (idx+1)
</code></pre>
<ul>
<li><strong>Data whitening</strong>: you can use any whitening you want; this is one of many solutions we used.</li>
</ul>
<pre><code>from scipy import signal

WINDOW = signal.tukey(4096, 1/4)[None, :]   # Tukey window, alpha = 0.25
def ts_window(ts):
return ts * WINDOW
def ts_whiten(ts, lf=24, hf=364, order=4):
sos = signal.butter(order, [lf, hf], btype="bandpass", output="sos", fs=2048)
normalization = np.sqrt((hf - lf) / (2048 / 2))
return signal.sosfiltfilt(sos, ts) / normalization
</code></pre>
<ul>
<li>After loading the data we <strong>Normalize</strong> the signal as follows:</li>
</ul>
<pre><code>ts = ts_window(ts)
ts = ts_whiten(ts)
fs = torch.fft.fft(torch.tensor(ts))
fs = fs / DET                      # normalize by the mean |FFT| computed above
ts = torch.fft.ifft(fs).real
# (you can also reuse the window here)
</code></pre>
<h1>Augmentations</h1>
<ul>
<li>Random channel shuffle</li>
<li>Random rolling (time shift)</li>
</ul>
<h1>Approaches</h1>
<p>Used stratified 5-fold training strategy. We had some ideas which gave a little boost to its native counterparts. The approaches were:</p>
<ul>
<li><strong>3 Channel (native)</strong> - stacked the detectors channel wise. </li>
<li><strong>6 Permutations</strong> - We horizontally stacked the 3 detectors with different permutations. Then stacked these 6 permutations channel wise.</li>
<li><strong>CQT+CWT</strong> - We took 3 channels(for 3 detectors) from CQT transformation and 3 from CWT transformation and stacked them.</li>
<li><strong>Double CWT</strong> - created two parallel CWT transformations with different parameters, with the idea of better frequency resolution by one and better time resolution by another.</li>
</ul>
<p><strong>NOTE</strong> : <strong>Parallel</strong> keyword denotes a modified architecture where we trained two CNN backbones simultaneously and concatenated their GAP’s output followed by two FC layers.</p>
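<p>A minimal PyTorch/timm sketch of this "Parallel" architecture (the backbone name and FC sizes are placeholders, not our exact configuration):</p>
<pre><code>import torch
from torch import nn
import timm

class ParallelCNN(nn.Module):
    # Two CNN backbones run side by side (e.g. on CQT and CWT inputs); their pooled
    # features are concatenated and passed through two fully connected layers.
    def __init__(self, name="resnet34", n_out=1):
        super().__init__()
        self.a = timm.create_model(name, pretrained=True, in_chans=3, num_classes=0)  # GAP output
        self.b = timm.create_model(name, pretrained=True, in_chans=3, num_classes=0)
        feat = self.a.num_features + self.b.num_features
        self.head = nn.Sequential(nn.Linear(feat, 256), nn.ReLU(), nn.Linear(256, n_out))

    def forward(self, x1, x2):
        return self.head(torch.cat([self.a(x1), self.b(x2)], dim=1))
</code></pre>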
<p>We trained backbones of different complexities which may use a specific approach or a combination of them. The models also varied with image sizes and frequency range. <br>
<img src="https://drive.google.com/file/d/1cWQpfQw1ghp6ZPqTR6OlmLnPCEe6Sb8r/view?usp=sharing" alt="model list"></p>
<h1>Ensembing</h1>
<ul>
<li>We had around 50 models at the end with our best model scoring 0.8808.</li>
<li>With weighted averaging, we were able to achieve around 0.8823 public LB ( 0.88047 private).</li>
<li>Switching to Stacking Ensembling with sklearn’s MLP classifier significantly boosted our public LB to 0.8830 ( 0.88133 private).</li>
<li>Also tried stacking with different classifiers like LGB, logistic etc and different model selections like top 10, top 20 etc, but they performed relatively poorer in both public and private LB.</li>
</ul>
<p>Special thanks to <a href="https://www.solversclub.com/" target="_blank">Solvers Club</a> for the generous GPU resources they provided. Kaggle resources alone would not have sufficed for this competition (forever-queued TPUs).</p>
<p>Although the shakeup was small, it was enough to bring us down from gold to silver. <br>
We wish we had trained 1D CNNs :)</p>
G2Net Gravitational Wave Detection | 15th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes I used global average of only-noise sample. Here is plot for three global average cases.<br>
Using all samples, only positive and only noise.<br>
<img src="https://user-images.githubusercontent.com/61892693/135459283-def629bb-20da-42ab-a215-d016ede30d0b.png" alt="avg_psd"></p>
<p>These are my settings for the log STFT with nnAudio.</p>
<pre><code> stft_params:
n_fft: 512
win_length: 256
hop_length: 8
window: hann
freq_scale: log #log/linear
iSTFT: False
fmin: 10
fmax: 1024
sr: 2048
output_format: Complex
</code></pre>
<p>After this nnAudio processing, I converted the output to dB scale, like <a href="https://librosa.org/doc/latest/generated/librosa.amplitude_to_db.html#librosa.amplitude_to_db" target="_blank">librosa.amplitude_to_db</a>, with the following parameters. </p>
<pre><code> amin: 1.0e-8
top_db: 200
ref: 1.0
</code></pre>
<p>After that I add a +135 dB offset and finally obtain a [0, 255] image. </p>
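<p>For reference, here is an approximate numpy re-implementation of that dB conversion with the parameters quoted above (the exact librosa semantics for <code>amin</code>/<code>ref</code> differ slightly, so treat this as a sketch):</p>
<pre><code>import numpy as np

def spec_to_uint8(spec, ref=1.0, amin=1.0e-8, top_db=200.0, offset=135.0):
    # amplitude -> dB, limit the dynamic range as top_db does, shift by +135 dB,
    # then clip into a [0, 255] image.
    mag = np.abs(spec)
    db = 20.0 * np.log10(np.maximum(amin, mag)) - 20.0 * np.log10(np.maximum(amin, ref))
    db = np.maximum(db, db.max() - top_db)
    return np.clip(db + offset, 0.0, 255.0).astype(np.uint8)
</code></pre>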
<p>Thanks to Kaggle and the organizers for hosting such a nice competition.<br>
My approach uses a simple Conv2D net, shown below.<br>
Sadly, more complex approaches did not work well in my experiments.<br>
<img src="https://user-images.githubusercontent.com/61892693/135449819-bcc73b16-eb7c-41e1-ba1a-6b870a535dfa.png" alt="model"></p>
<ul>
<li>Whitening: Using average PSD. Averaging over all noise samples for each site.</li>
<li>CQT Scaling with <code>filter_scale = 8/bins_per_octave</code> and (fmin, fmax)=(20, 1024). Both abs and angle part were used.</li>
<li>Augmentation<ul>
<li>Horizontal/time shift<ul>
<li>Pad both sides and then take a horizontal random crop to get a time-shifted image. -> ROC +0.002.</li>
<li>Mixup, prevent from overfitting</li></ul></li>
<li>GeM Fixed power 3 was better than the trainable case. -> ROC +0.001</li>
<li>Scores:
<table>
<thead>
<tr>
<th>net</th>
<th>spec</th>
<th>height</th>
<th>width</th>
<th>PB score</th>
</tr>
</thead>
<tbody>
<tr>
<td>effnet b0</td>
<td>Log STFT</td>
<td>256</td>
<td>513</td>
<td>0.8760</td>
</tr>
<tr>
<td>effnet b0</td>
<td>CQT</td>
<td>181</td>
<td>513</td>
<td>0.8768</td>
</tr>
<tr>
<td>effnet b3</td>
<td>CQT</td>
<td>181</td>
<td>1024</td>
<td>0.8797</td>
</tr>
<tr>
<td>effnet b3</td>
<td>CQT</td>
<td>273</td>
<td>1024</td>
<td>0.8802</td>
</tr>
</tbody>
</table>
My final score is an ensemble of the Log STFT/CQT models.</li>
<li>My code is available <a href="https://github.com/Fkaneko/kaggle_g2net_gravitational_wave_detection" target="_blank">here</a></li>
</ul>
<p>In my early experiments, heavy overfitting was fixed with Mixup. <br>
<img src="https://user-images.githubusercontent.com/61892693/135468694-7dc429a7-9efd-4379-907e-0e12c5d270f8.png" alt="roc"><br>
This is validation ROC for each epoch. </p>
<p>As for whitening, I chose this method, but I'm still not confident about how to do it properly…</p>
<p>Yeah, these three curves are quite similar, and it is hard to decide which one is best.<br>
The following figure shows the PSD distribution over the target range 35-350 Hz. There is some clear separation between target and noise samples, and I tried to preserve this distribution as much as I could.<br>
That's why I chose the noise-only curve, though I'm not sure about its effect. My original motivation for this PSD calculation was to determine one global PSD curve, so I didn't require 100% justification on this point…</p>
<p><img src="https://user-images.githubusercontent.com/61892693/135473960-71e60c21-1a24-4095-b184-cf0a3e21a502.png" alt="Screenshot from 2021-09-30 23-24-08"></p> |
G2Net Gravitational Wave Detection | 19th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to the host and Kaggle for organizing the competition. Thank you to all my teammates( <a href="https://www.kaggle.com/sunakuzira" target="_blank">@sunakuzira</a> <a href="https://www.kaggle.com/kzkt0713" target="_blank">@kzkt0713</a> <a href="https://www.kaggle.com/keiichimase" target="_blank">@keiichimase</a> <a href="https://www.kaggle.com/kanbehmw" target="_blank">@kanbehmw</a> )and participants. I would like to share a summary of our solution.</p>
<h1>CQT</h1>
<table>
<thead>
<tr>
<th>Model</th>
<th>oof</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>resnet34d</td>
<td>0.8794</td>
<td>0.8797</td>
</tr>
<tr>
<td>tf_efficientnet_b2_ap</td>
<td>0.8802</td>
<td>0.8803</td>
</tr>
<tr>
<td>tf_efficientnetvv2_b1</td>
<td>0.8800</td>
<td>0.8800</td>
</tr>
<tr>
<td>tf_efficientnet_b2_ap</td>
<td>0.8801</td>
<td>0.8800</td>
</tr>
<tr>
<td>resnet34</td>
<td>0.8787</td>
<td>0.8786</td>
</tr>
</tbody>
</table>
<h4>preprocessing</h4>
<ul>
<li>standardization: We computed the standardization statistics using the entire data set.</li>
<li>image resize: 276x513 or 207x513</li>
<li>channel add: We added channels formed by adding and subtracting the two LIGO waves.</li>
</ul>
<pre><code> waves = np.stack([
waves[0], waves[1], waves[2],
waves[0]+waves[1],
waves[0]-waves[1]])
</code></pre>
<h4>nnAudio.Spectrogram.CQT1992v2</h4>
<pre><code>CQT1992v2(sr=2048, fmin=20, fmax=1024, hop_length=8or4, window="flattop")
</code></pre>
<h4>Augmentation</h4>
<ul>
<li>augmentation by mixed waveforms</li>
</ul>
<pre><code>class CustomDataset(Dataset):
def __init__(self, train, ...):
self.train = train.reset_index(drop=True).copy()
self.labels = train["target"].values
self.train_target0 = train[train["target"] == 0]
...
def __getitem__(self, index):
y_true = self.labels[index]
y_true = torch.tensor(y_true).float()
...
        # With p=0.5, mix the current waves with another training sample:
        #  - a negative (target=0) sample is mixed with any sample, and the label
        #    becomes 1 if the mixed-in sample is positive;
        #  - a positive sample is only mixed with negatives, so its label stays 1.
        if np.random.rand() > 0.5:
            if y_true == 0:
                sample = self.train.sample()
            else:
                sample = self.train_target0.sample()
            wave2_path = sample.iloc[0]['file_path']
            waves2 = self.load_img(wave2_path)
            waves = waves + waves2
            if sample.iloc[0]['target'] == 1:
                y_true = torch.tensor(1).float()
</code></pre>
<ul>
<li>albumentations</li>
</ul>
<pre><code>albumentations.ShiftScaleRotate(p=0.5, shift_limit=0.0, scale_limit=0.2or0.3, rotate_limit=0)
</code></pre>
<h1>1dCNN</h1>
<table>
<thead>
<tr>
<th>Model</th>
<th>oof</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>1dCNN</td>
<td>0.8753</td>
<td>0.8769</td>
</tr>
</tbody>
</table>
<p>The architecture of 1dcnn used Public Kernel.( <a href="https://www.kaggle.com/kit716/grav-wave-detection/data?select=g2net_models.py" target="_blank">https://www.kaggle.com/kit716/grav-wave-detection/data?select=g2net_models.py</a> )</p>
<h4>preprocessing</h4>
<ul>
<li>standardization: We computed the standardization statistics using the entire data set.</li>
<li>band pass: scipy.signal.butter(6, (35, 800), btype='bandpass', fs=2048)</li>
<li>channel add: Same as CQT.</li>
</ul>
<h4>Augmentation</h4>
<ul>
<li>random invert waves</li>
</ul>
<pre><code>if np.random.rand() > 0.5:
waves = waves*-1
</code></pre>
<ul>
<li>albumentations</li>
</ul>
<pre><code>albumentations.ShiftScaleRotate(p=0.5, shift_limit=0.0, scale_limit=0.1, rotate_limit=0)
</code></pre>
<h4>TTA</h4>
<ul>
<li>We used TTA with the same Augmentation as train.</li>
</ul>
<h1>pseudo label</h1>
<ul>
<li>We used a high LB submission.csv as a <strong>soft label</strong>.</li>
</ul>
<h1>weight optimize blending</h1>
<ul>
<li>We used oof to optimize the weights. We used Public Kernel.( <a href="https://www.kaggle.com/itsuki9180/g2net-oof-weight-optimizer" target="_blank">https://www.kaggle.com/itsuki9180/g2net-oof-weight-optimizer</a> )</li>
<li><strong>6model oof : CV: 0.8812, Private LB: 0.8805</strong></li>
</ul> |
G2Net Gravitational Wave Detection | 23rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>At first, I would like to thank my team members <a href="https://www.kaggle.com/vladimirsydor" target="_blank">@vladimirsydor</a>, <a href="https://www.kaggle.com/zekamrozek" target="_blank">@zekamrozek</a>, <a href="https://www.kaggle.com/yakuben" target="_blank">@yakuben</a> and <a href="https://www.kaggle.com/uulott" target="_blank">@uulott</a> for their work and organizers and the Kaggle community for the amazing experience we had during this competition.</p>
<h2>TL;DR</h2>
<p>Blend of 6 2D CNNs + 3 1D CNNs: 1 x 2D EfficientNet-B0 + 3 x B3 + 2 x B5 + 2 x 1D ResNet with transformer + 1 basic 1D CNN</p>
<h2>Details</h2>
<p><strong>Bandpass</strong> was crucial for this data. We experimented with it a lot and tried different setups; these parameters showed better results than the others we tried:<br>
<code>fmin=20</code>,<br>
<code>fmax=1024</code> or <code>fmax=500</code> or <code>fmax=600</code> depending on the model,<br>
<code>hop_length=4</code>,<br>
<code>bins_per_octave=12</code>,<br>
<code>filter_scale=0.5</code>,<br>
<code>pad=10</code>.</p>
<p>For images, we used only CQT and didn't experiment with CWT.</p>
<p><strong>Augmentations</strong> that worked for us:</p>
<ol>
<li>Wave augs:</li>
</ol>
<ul>
<li>Random shift channel +-30;</li>
<li>Hard MixUp of waves: <code>new_wave = (wave1 + wave2) / 2</code> and <code>new_target = max(target1, target2)</code>;</li>
<li>Adding Gaussian Noise with small std.</li>
</ul>
<p><strong>Models</strong><br>
For most of the competition, we were training EfficentNets, until around 1-2 last weeks we started experimenting with 1D models.</p>
<p>We found that increasing the input image size improved performance even for EfficientNet-B0, so we experimented with it more. The best result for us came from using <code>stride=(1,2)</code> instead of the default <code>(2,2)</code> while keeping the input shape unchanged. That allowed the model to "stretch" the input across the frequency axis by itself, giving the effect of feeding higher-resolution images to the network.</p>
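<p>In timm this kind of change is a one-liner; a hedged sketch (the backbone name is illustrative, and which axis keeps stride 1 depends on how the spectrogram is oriented):</p>
<pre><code>import timm

# Keep the input size unchanged but downsample one axis less aggressively in the stem;
# the network then effectively "stretches" the image along that axis.
model = timm.create_model("efficientnet_b0", pretrained=True, in_chans=3, num_classes=1)
model.conv_stem.stride = (1, 2)   # default stem stride is (2, 2)
</code></pre>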
<p>Our 2D models suffered from spoiled gradients in the BatchNorm layers during the first epochs, so we used two procedures to overcome that. The first idea that worked was to replace BatchNorm with InstanceNorm throughout the whole model. The second was to freeze all encoder layers except BatchNorm and train only them, plus the linear head, for the first 2 epochs.</p>
<p>As was mentioned, we started working late on 1D models, mainly simple stacking Conv layers, but after merging with <a href="https://www.kaggle.com/uulott" target="_blank">@uulott</a>, he brought a really promising 1D ResNet model that performed on similar level as our best 2D ones (~0.879 LB).</p>
<p>Finally, all of our models benefited when we started finetuning them on a small learning rate for multiple epochs with augmentations turned off. That boosted us past 0.88 LB and was keeping us in silver as we kept going up LB the more models we fine-tune and blend with.</p>
<p><strong>Ensemble</strong><br>
For ensembling, we used straightforward hyperopt blending. Adding Any 1D model boosted all of our blends.</p>
<p><strong>The final blend</strong> consisted of 6 2D-CNNs and 3 1D-CNNs:</p>
<ol>
<li><p>EfficientNet-B5, successful augs, freeze 2 epochs, normalize after CQT; </p></li>
<li><p>Same B5, but fine-tuned;</p></li>
<li><p>fine-tuned EfficientNet-B3, all successful augs, freeze 2 epochs, normalize after CQT;</p></li>
<li><p>fine-tuned EfficientNet-B3, initial stride=(1,2) GaussNoise aug, freeze 2 epochs, normalize after CQT;</p></li>
<li><p>fine-tuned EfficientNet-B3, initial stride=(1,2) GaussNoise aug, freeze 2 epochs, normalize after CQT;</p></li>
<li><p>fine-tuned EfficientNet-B0, InstanceNorm instead of BatchNorm, normalize after CQT;</p></li>
<li><p>1D ResNet + Transformer head, filter order 8 and CosineAnnealing;</p></li>
<li><p>1D ResNet + Transformer head, filter order 5 and ReduceLROnPlateau;</p></li>
<li><p>fine-tuned 6-layer CNN1d with MLP head, MixUp and GaussNoise, linear warm-up and cosine decay.</p></li>
</ol>
<p><strong>P.S</strong>. Other things we tried, but it didn't make it to the final blend:</p>
<ul>
<li>drop confident 1s before fine-tuning;</li>
<li>InceptionTime for 1D + Transformer head;</li>
<li>8 layer custom 1D CNN;</li>
<li>rexnet_100;</li>
<li>EfficientNet-B2 and others;</li>
<li>resize CQT image into (512,512) or (256, 256).</li>
</ul>
<p>Augmentations that we stopped using :</p>
<ul>
<li>Channels shuffle for an image in 2D models;</li>
<li>Horizontal and vertical flip of images;</li>
<li>normal MixUp.</li>
</ul> |
G2Net Gravitational Wave Detection | 2nd Place Solution: trainable custom frontend [EventHorizon] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Code is now available: <a href="https://github.com/analokmaus/kaggle-g2net-public" target="_blank">https://github.com/analokmaus/kaggle-g2net-public</a></p>
<p>Thank you for the questions. I forgot to add some key information 😫</p>
<ul>
<li>is that for a single fold or k-fold?<br>
Yes, all results above are from 5-fold CV.</li>
<li>does it include TTA, etc<br>
Yes, the same augmentation used in training was used in TTA.</li>
<li>CNN refers to 1d CNN wavegram?<br>
Yes, CNN = 1d-CNN wavegram.</li>
</ul>
<p>The original WaveNet with tanh and sigmoid activations converged very slowly, which is why I used a simplified version of it (in the figure: "1d-CNN").</p>
<p>First of all, I would like to express deep gratitude to organizers and all the teams for making this competition so interesting and exciting. Also, I want to express a big congratulations to the first place, who dominated this competition with a single ResNet34 model :)</p>
<p>Code is available at: <a href="https://github.com/analokmaus/kaggle-g2net-public" target="_blank">https://github.com/analokmaus/kaggle-g2net-public</a></p>
<h1>Common settings</h1>
<p>num epochs = 8<br>
optimizer = Adam<br>
scheduler = CosineAnnealingWarmRestarts(8)<br>
loss function = BCE<br>
cross validation = target stratified 5 fold cross validation</p>
<h1>Frontend architectures</h1>
<p>Neural network architecture played the most important role for improving the performance.<br>
Here are the frontend architectures I used. <br>
Trainable frontend in general outperformed fixed frontend.<br>
<img src="https://pbs.twimg.com/media/FAfu6LJVEAA8u53?format=jpg&name=medium" alt=""><br>
<img src="https://pbs.twimg.com/media/FAfu7VvUYAYjT1y?format=jpg&name=medium" alt=""><br>
<img src="https://pbs.twimg.com/media/FAfu7_RUUAQM2x4?format=jpg&name=medium" alt=""><br>
<img src="https://pbs.twimg.com/media/FAfu8t2VgAgWiUX?format=jpg&name=medium" alt=""></p>
<h1>Preprocessing</h1>
<p>I applied a bandpass filter to all networks: [16, 512] for the CWT-CNN and trainable-frontend CNN, and [30, 300] for the 1d-CNN. Whitening did not work.</p>
<h1>Augmentations</h1>
<p>I tested several types of augmentations on the waves and spectrograms, and only a few of the wave augmentations worked. I added Gaussian noise for the 2d-CNN networks and flipped the wave amplitude for the 1d-CNN.</p>
<h1>Pseudo-labeling</h1>
<p>Re-training on the soft (continuous) pseudo-labelled test dataset improved AUC by ~0.001. Label smoothing during pseudo-labeling also helped a bit. </p>
<h1>Stacking</h1>
<p>I kept OOFs and predictions from all my experiments. <br>
A cross-validated Ridge regression model was used to combine the outputs of the neural network models. <br>
A constant improvement in both CV and LB was observed as I added more models to the stacking model.<br>
Finally, I ran greedy model selection and chose 20 (/10/5) models to maximize CV. </p>
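<p>A minimal sketch of this stacking + greedy selection loop (the Ridge alpha, the 5-fold CV and the stopping rule are my guesses, not the exact setup):</p>
<pre><code>import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

def greedy_stack(oof, y, max_models=20):
    # oof: (n_samples, n_models) out-of-fold predictions; greedily add the model that
    # most improves the cross-validated AUC of a Ridge meta-model.
    chosen, best_auc = [], 0.0
    for _ in range(max_models):
        scores = []
        for m in range(oof.shape[1]):
            if m in chosen:
                scores.append(-1.0)
                continue
            cols = chosen + [m]
            pred = cross_val_predict(Ridge(alpha=1.0), oof[:, cols], y, cv=5)
            scores.append(roc_auc_score(y, pred))
        m_best = int(np.argmax(scores))
        if scores[m_best] > best_auc:
            chosen.append(m_best)
            best_auc = scores[m_best]
        else:
            break
    return chosen, best_auc
</code></pre>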
<p><strong>Stacking 20 models: CV 0.88283 / Public LB 0.8845 / Private LB 0.8829</strong><br>
Stacking 10 models: CV 0.88270 / Public LB 0.8842 / Private LB 0.8827<br>
Stacking 5 models: CV 0.88242 / Public LB 0.8839 / Private LB 0.8825</p>
<h2>Appendix: all networks</h2>
<p><img src="https://pbs.twimg.com/media/FAf6m7SUUAM2Plk?format=jpg&name=medium" alt=""><br>
<img src="https://pbs.twimg.com/media/FAf6oBSVEAAFwAa?format=jpg&name=medium" alt=""></p>
<p>In order to avoid information leak, the pseudo labels are from the same model. Pseudo labeled test set was added to the training set in each fold.</p>
<p>While a spectrogram shows the intensity of a specific range of frequencies, a CNN frontend can extract information from waveforms more dynamically.</p> |
G2Net Gravitational Wave Detection | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you so much <a href="https://www.kaggle.com/yamsam" target="_blank">@yamsam</a>, it was my pleasure to work with u on this challenge!</p>
<p>Thanks </p>
<p>Thanks you <a href="https://www.kaggle.com/anjum48" target="_blank">@anjum48</a>, likewise, it was really great working with you. </p>
<p>Thank you so much <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a>. Even if we got our first gold medals together, for me it took quite longer than for you to become a competition GM. But I finally made it.</p>
<p>Definitely!</p>
<p>Thank you so much <a href="https://www.kaggle.com/urvishp80" target="_blank">@urvishp80</a>. Unfortunately, we missed gold in the previous competition we worked togather, but eventually I received the last gold required for GM.</p>
<p>Thanks so much. 1D models were really super stars in this competitions.</p>
<p>Thank you so much!</p>
<p>Thanks so much. 1D models were really super effective here. Fortunately, ~1.5 weeks before the end of the competition we realized that a proper architecture can easily give quite high performance.<br>
It was quite a strange feeling when, after ~12 hours of 1D architecture tuning, I got better performance than from the 2D models after a month of experimenting with them… Architecture tuning is really a cool thing; it is also how I got my first gold medal.</p>
<p>Thanks so much <a href="https://www.kaggle.com/richx86" target="_blank">@richx86</a> , it was a long journey for me but finally I reached this point. The time to post some datasets/random posts… just joking.<br>
I was happy to work with you on this competition. It was really great teamwork, which at the end got rewarded.</p>
<h3>TL;DR</h3>
<h5>Models we used in our best submission: 15 1D Model + 8 2D Model</h5>
<p><strong>2D Models</strong> [best single model: <strong>0.87875/0.8805/0.8787</strong> at CV, public, and private LB]</p>
<ul>
<li>CQT, CWT Transformation</li>
<li>Efficientnet, Resnext, InceptionV3…</li>
<li>TTA: shuffle LIGO channel</li>
</ul>
<p><strong>1D Models</strong> [best single model: <strong>0.8819/0.8827/0.8820</strong> at CV, public, and private LB]</p>
<ul>
<li>Customized architecture targeted at GW detection</li>
<li>TTA: vflip, shuffle LIGO channels, Gaussian noise, time shift, time mask, MC dropout</li>
</ul>
<p><strong>Training:</strong></p>
<ul>
<li>Pretraining with GW</li>
<li>Training with Pseudo Label or Soft Label</li>
<li>BCE, Rank Loss</li>
<li>MixUp</li>
<li>AdamW, RangerLars+Lookahead optimizer </li>
</ul>
<p><strong>Preprocessing:</strong></p>
<ul>
<li>Avg PSD of target 0 (Design Curves)</li>
<li>Extending waves</li>
<li>Whitening with Tukey window</li>
</ul>
<p><strong>GW Simulation</strong></p>
<ul>
<li>Distribution of Total Mass</li>
<li>SNR injection ratio max(N(3.6,1),1)</li>
</ul>
<p><strong>Ensemble</strong></p>
<ul>
<li>CMA-ES Optimization</li>
<li>Hacking Private LB</li>
</ul>
<h3>Introduction</h3>
<p>Our team would like to thank organizers and kaggle for making this competition possible. Also, I want to express my gratitude to my outstanding teammates <a href="https://www.kaggle.com/vincentwang25" target="_blank">@vincentwang25</a>, <a href="https://www.kaggle.com/richx86" target="_blank">@richx86</a>, <a href="https://www.kaggle.com/anjum48" target="_blank">@anjum48</a>, and <a href="https://www.kaggle.com/yamsam" target="_blank">@yamsam</a> for their incredible contribution towards our final result. In this competition I got the last gold medal required for getting kaggle GM, and also it is the first gold medal for <a href="https://www.kaggle.com/vincentwang25" target="_blank">@vincentwang25</a> and <a href="https://www.kaggle.com/richx86" target="_blank">@richx86</a> . The provided write-up is the result of the cumulative efforts of all our team members, and I just got the privilege to post it.</p>
<h3>Details</h3>
<h4>2D Models</h4>
<p>Like many of the participants, our team initially focused on 2D models. Among the various approaches we tried, we found that whitened signals, which we will discuss later, was the most effective preprocessing for signals. We created the following 2D models using the CQT and CWT images generated from the whitened signal as input.</p>
<p>We also tried various augmentations for 2D images (mixup, cutout, …), but most of them did not work here, however mixup on the input waveform prior to CQT/CWT did help counteract overfitting. The most effective one was the augmentation of swapping LIGO signals. This worked both for training and inference (TTA, Test Time Augmentation) and we found soft pseudo labeling also improved the score. </p>
<p>The performance of our best single 2D model is <strong>0.87875/0.8805/0.8787</strong> at CV, public, and private LB. </p>
<ul>
<li>EfficientNet(B3, B4, B5, B7), EfficientNetV2(M), ResNet200D, Inception-V3 (also we performed a number of initial experiments with ResNeXt models)</li>
<li>CQT and CWT images generated based on the whitened signal</li>
<li>Image size 128 x 128 〜 512 x 512</li>
<li>Soft leak-free pseudo labeling from ensemble results</li>
<li>LIGO channel swap argumentation (randomly swapping LIGO channels) for both training and TTA</li>
<li>1D mixup prior to CQT/CWT</li>
<li>Adding a 4th channel to the spectrogram/scalogram input which is just a linear gradient (-1, 1) along the frequency dimension used for frequency encoding (similar to positional encoding in transformers)</li>
</ul>
<h4>1D Models</h4>
<p>1D models appeared to be the key component of our solution, even if we didn't make great improvement until the last 1.5 weeks of the competition. The reason why these models were not widely explored by most of the participants may be the need of using signal whitening to reach a performance comparable with 2D models (at least easily), and whitening is not straightforward for 2s long signals (see discussion below). However, 1D models are much faster to train, and they also outperform our 2D setup. The performance of our best single 1D model is <strong>0.8819/0.8827/0.8820</strong> at CV, public, and private LB. So <strong>it can reach top-7 LB after ~8 hours of training</strong>.</p>
<p>One of the main contributions towards this result is the proper choice of the model architecture for GW detection task. Specifically, <strong>GW is not just a signal of the specific shape, but rather a correlation in signal between multiple detectors</strong>. Meanwhile, signals may be shifted by up to ~10-20 ms because of the time needed for the signal to cross the distance between detectors. So direct concatenation of signals into (3,4096) stack and then applying a joined convolution is not a good idea (our baseline V0 architecture with CV of 0.8768). Random shift between components prohibits the generation of joined features. Thus, we asked the question, why not split the network into branches for different detectors, like proposed in <a href="https://www.sciencedirect.com/science/article/pii/S0370269320308327" target="_blank">this paper</a>? So the first layers, extractor and following Res blocks, learn how to extract features from a signal, a kind of learnable FFT or wavelet transform. So before merging the signals the network already has a basic understanding of what is going on. We also share weights between LIGO1 and LIGO2 branches because the signals are very similar.<br>
<img src="https://i.ibb.co/4PgdmLY/1Dmodel.png" alt=""></p>
<p>Merge of the extracted features instead of the original signal mitigates the effect of the relative shift (like in Short Time Fourier Transform correlation turns into a product of two aligned cells). So simple concatenation at this stage (instead of early concatenation) followed by several Res blocks (V1 architecture) gives an improvement from <strong>0.8768 to 0.8789</strong>. However, the model after combining the signal and getting a better idea about GW, may still want to look into individual branches as a reference. So we extend our individual branches and perform the second concatenation at the next convolutional block (V2 architecture). It results in a further improvement of CV to <strong>0.8804</strong>. </p>
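<p>A drastically simplified PyTorch sketch of this branch-then-merge idea (the real V2 models also use GeM pooling, concat pooling, a second late concatenation, CBAM, etc.; the blocks below stand in for the extractor + Res blocks, and all sizes are illustrative):</p>
<pre><code>import torch
from torch import nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv1d(c_in, c_out, 31, padding=15), nn.BatchNorm1d(c_out), nn.SiLU())

class BranchedGWNet(nn.Module):
    # LIGO Hanford/Livingston share one extractor; Virgo gets its own.
    # Features are concatenated only after each branch has built its own representation.
    def __init__(self, n=16):
        super().__init__()
        self.ext_ligo = conv_block(1, n)
        self.ext_virgo = conv_block(1, n)
        self.branch = conv_block(n, n)                 # applied to every detector separately
        self.merged = nn.Sequential(conv_block(3 * n, 2 * n), conv_block(2 * n, 2 * n))
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(2 * n, 1))

    def forward(self, x):                              # x: (batch, 3, 4096) whitened waves
        h1 = self.branch(self.ext_ligo(x[:, 0:1]))     # LIGO 1
        h2 = self.branch(self.ext_ligo(x[:, 1:2]))     # LIGO 2 (shared weights)
        v = self.branch(self.ext_virgo(x[:, 2:3]))     # Virgo
        return self.head(self.merged(torch.cat([h1, h2, v], dim=1)))
</code></pre>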
<p>After the basic structure of the V2 model was defined, we performed a number of additional experiments for further model optimization. The model architecture tricks giving further improvement include the use of SiLU instead of ReLU, use of GeM instead of strided convolution, use of concatenation pooling at the end of the convolutional part. In one of our final runs, we used ResNeSt blocks (Split Attention convolution) having a comparable performance in the preliminary experiments, but it also performed slightly worse at the end of full training. Using CBAM modules and Stochastic Depth modules gave a slight boost and made the 1D models more diversified. The network width, n, is equal to 16 and 32 for our final models. One of our experiments is also performed for a combined 1D+2D models, pretrained separately and then finetuned jointly, which gave 0.8815/0.8831/0.8817 score.</p>
<p><strong>Things that didn’t work:</strong> use of larger network depth (extra Res blocks), use of smaller/larger conv size in the extractor/first blocks, use of multi-head self-attention blocks before pooling or in the last stages of ResBlocks (i.e. Bottleneck Transformers), use of densely connected or ResNeXt blocks instead of ReBlocks, learnable CWT like extractors (FFT->multiplication by learnable weights->iFFT), WaveNet like network structures, pretrained ViT following the extractors (instead of customized resnet).</p>
<h5>Training (1D models)</h5>
<ul>
<li>Pretraining with Simulated GW for 2~4 epochs (<strong>2-8bps</strong> boost)</li>
<li>Training with Pseudo-labeling data for 4~6 epochs depending on the model</li>
<li>Training with rank loss and low lr for 2 epoch (<strong>~1bps</strong> performance boost)</li>
</ul>
<p>TTA based on multiplying the signal by -1 and first/second channel swap gave <strong>~2bps</strong> boost. 64 fold MC dropout gave an additional <strong>~1bps</strong> boost. Also during training in some experiments we used 0.05 spectral dropout: drop the given percentage of FFT spectrum during whitening.</p>
<h3>Preprocessing</h3>
<p>Regarding whitening, direct use of packages, such as pycbc, doesn’t work mainly because of the short duration of the provided signals: only 2 seconds chunks in contrast to the almost unlimited length of data from LIGO/VIRGO detectors. To make the estimated PSD smooth, pycbc package uses an algorithm that corrupts the boundary of data, which is too costly for our dataset, whose duration of signals is only 2 seconds. We reduce the variance of estimated PSD by taking the average PSD for all negative training samples. This is the key to make whitening work (interestingly, this averaging idea came up independently to two of the team members). To further reduce the boundary effect from discontinuity and allow ourselves to use a window function that has a larger decay area (for example, Tukey window with a larger alpha), we first <strong>extend the wave while keeping the first derivative continuous at boundaries</strong>. Finally, we apply the window function to each data and <strong>use the average PSD to normalize the wave for different frequencies</strong>.</p>
<p><strong>For 1D model:</strong></p>
<ul>
<li>Extend the signal to (3,8192)</li>
<li>Tukey window with alpha 0.5</li>
<li>Whitening with PSD based on the average FFT of all negative training examples</li>
</ul>
<p><strong>For 2D model:</strong></p>
<ul>
<li>Using the whiten data and apply CQT transform with the following parameters:</li>
<li>CQT1992v2(sr=2048, fmin=20, fmax=1000, window='flattop', bins_per_octave=48, filter_scale=0.25, hop_length=8)</li>
<li>Resize (128x128, 256x256 …. ) </li>
</ul>
<p><img src="https://i.ibb.co/CtL2nk6/whitening.png" alt=""></p>
<h3>GW Simulation</h3>
<p>This idea is coming from curriculum learning, and in <a href="https://arxiv.org/abs/2106.03741" target="_blank">this paper</a>, it mentioned that “We find that the deep learning algorithms can generalize <strong>low signal-to-noise ratio (SNR) signals to high SNR ones but not vice versa</strong>”, so we follow it and try to generate a signal and inject into the noise with low SNR. Even though there are around 15 parameters, we found that the most important one is the total mass and mass ratio (maybe we are wrong) because it affects the shape of GW the most through eyeballing. So we adjust the total mass and mass ratio using different distributions and inject the signal into the noise with a given SNR following max(random.gauss(3.6,1),1) distribution. This SNR distribution is determined by checking the training loss trend: we want it to follow the trend of original data (not too hard, not too simple). This way gives us a <strong>2~8bps increase</strong> depending on the model we use.<br>
We also tried to follow this idea by giving Hard Positive samples from the train data more weight but due to the time constraints, we didn’t make it work. It could be a potential win here.</p>
<h3>Ensemble</h3>
<p>First, to confirm that train and test data are similar and do not have any hidden peculiarities we used <strong>adversarial validation</strong> that gave 0.5 AUC = train and test data are indistinguishable.</p>
<p>We tried many different methods to ensemble the models and saw the following relative performance trend: <a href="https://github.com/CMA-ES/pycma" target="_blank">CME-ES</a> with Logit > CME-ES with rank> Scipy Optimization > Neural Network > other methods. We also tried to use <code>sklearn.preprocessing.PolynomialFeatures</code> with probability prediction to do the CME-ES optimization. It brings the highest CV and LB but with a little chance of overfitting. We are glad that it turns out to be our best 0.8829 submission.</p>
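<p>A small sketch of the CMA-ES weight search on logits (the sigma0 value, the clipping, and the uniform initialization are assumptions, not our exact settings):</p>
<pre><code>import numpy as np
import cma
from sklearn.metrics import roc_auc_score
from scipy.special import logit

def blend_weights(oof_preds, y, sigma0=0.2):
    # oof_preds: (n_samples, n_models) OOF probabilities; optimize blend weights on logits
    # so that the blended score maximizes AUC (CMA-ES minimizes, hence the minus sign).
    X = logit(np.clip(oof_preds, 1e-6, 1 - 1e-6))
    def neg_auc(w):
        return -roc_auc_score(y, X @ np.asarray(w))
    es = cma.CMAEvolutionStrategy(np.ones(X.shape[1]) / X.shape[1], sigma0)
    es.optimize(neg_auc)
    return np.asarray(es.result.xbest)
</code></pre>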
<p>Our second submission is based on a simulation of the private LB, obtained by excluding 16% of our OOF data from the CV optimization. The resulting model weights are more robust to data noise and can potentially lead to better performance on the private LB. We did this because we found the CV and LB correlation to be very high, and we also used adversarial validation to check that they are indeed similar. So we bootstrapped 16% of the OOF data with a CV similar to the public LB score, and <strong>the optimized CV for the remaining data matched the private LB (0.8828 for one of our submissions)</strong>. </p>
<h3>Acknowledgment</h3>
<p>Some of the team members used JarvisLabs.ai for their GPU workstations. The cloud platform is easy to use, stable, and very affordable. The pause functionality is great. Recommend this platform to Kagglers.</p>
<p><a href="https://www.kaggle.com/naoism" target="_blank">@naoism</a>, Thank you so much. Congratulations to u too on getting Competition Master, it's an important milestone. I hope you can get your GM title soon!</p>
<p>Just MS PowerPoint… <br>
During my PhD I realized that it is nearly the best tool to create images for papers/proposals.</p>
<p><a href="https://www.kaggle.com/yvonnef" target="_blank">@yvonnef</a>, thanks so much</p> |
G2Net Gravitational Wave Detection | 4th Place Solution Brief Summary : Magic of 1D CNN | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/nanthennguyen" target="_blank">@nanthennguyen</a> , I think being persistent was the key here haha</p>
<p>The skip connection idea came from MoA competition , there <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> has used skip connections with DNNS to get good boost when we tried it here as well</p>
<p>Haha ,congrats to you as well , it was fun to see GOGOGO and OGOGOG together <a href="https://www.kaggle.com/anjum48" target="_blank">@anjum48</a> 😋</p>
<p>Haha that's very sweet of you also loved your zebra augs 🔥</p>
<p>We will share the code and full solution very soon <a href="https://www.kaggle.com/nyleve" target="_blank">@nyleve</a> </p>
<p>Thanks <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> ❤️</p>
<p>Thanks my friend </p>
<p>It was an honour for us to have competed with you this close and also fun . </p>
<p>Hi all ,<br>
First of all I want to thank the organisers and kaggle team for organising such a wonderful and interesting competition , we learned a lot . <b> I would like to thank <a>JarvisLabs.ai </a> (a GPU cloud based platform offering modern and extremely easy to launch GPU instances) for helping us during the competition by providing modern GPU cards. The platform enabled us to do multiple experiments rapidly with instant GPU instances. All our models were trained on <a href="https://cloud.jarvislabs.ai/">cloud.jarvislabs.ai</a> GPU instances and this could not have been achieved without them. </b></p>
<p>It was really a tough fight , we would have liked to have finished in the money but nevertheless we are really happy with our finish. It was lovely to once again team up with <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> <a href="https://www.kaggle.com/benihime91" target="_blank">@benihime91</a><br>
<a href="https://www.kaggle.com/pheadrus" target="_blank">@pheadrus</a> and <a href="https://www.kaggle.com/proletheus" target="_blank">@proletheus</a></p>
<p>Similar to other teams we also started with constant Q Transforms and 2D CNN architectures , we did a lot of experiments with different preprocessing techniques (bandpass, whitening , denoising AE's , CWT ,etc details to be shared later) , this allowed us to reach top 15x .</p>
<p>Thanks to heng and other kaggler's experimentation we realized how well conv1d and sequence models are working on this data and it also intuitively made sense to us , so we started working on that and a custom 1d cnn architecture forms the main backbone of our solution</p>
<h1>1D/Sequence Model Magic</h1>
<p>We are really happy to announce that our single custom 1d architecture model scores <b> 0.8838 public LB / 0.8823 private LB and is in the gold zone alone </b>. (Hoping to write a paper around it)</p>
<p>We started with a rather simple Conv1D architecture with just 8 conv1d layers and a plain two-layer linear head; to our surprise it scored really well (CV 0.8766 / LB 0.8788). This encouraged us to experiment more with sequence models. We tried a mix of LSTMs, GRUs, transformers, etc., but were not able to beat the plain conv1d model.</p>
<p>Finally we decided to train a deep conv1d model with residuals (similar to a ResNet) and it worked like a charm; we then changed the head from linear to LSTM and got a further boost. Our final model architecture has the following flow:</p>
<p>GW waves numpy array --> horizontally stack all three to get a (1, 4096*3) array --> band-pass filtering --> deep conv1d backbone with residuals --> LSTM head --> prediction</p>
<p>We tried GRU, transformer, and BERT-like heads, but the LSTM head worked best.</p>
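<p>A rough PyTorch sketch of that flow (kernel sizes, widths, depth, and the use of the last LSTM state are illustrative choices; the exact layer configuration is not reproduced here):</p>
<pre><code>import torch
from torch import nn

class Conv1dLSTM(nn.Module):
    # The three detectors are stacked side by side into one (1, 3*4096) sequence,
    # passed through a residual Conv1d backbone, and an LSTM head summarizes the
    # downsampled sequence before the final prediction.
    def __init__(self, width=64, n_blocks=6):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv1d(1, width, 15, stride=4, padding=7), nn.SiLU())
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv1d(width, width, 7, padding=3), nn.BatchNorm1d(width), nn.SiLU())
            for _ in range(n_blocks)
        ])
        self.lstm = nn.LSTM(width, width, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * width, 1)

    def forward(self, x):                      # x: (batch, 1, 12288) = 3 stacked waves
        h = self.stem(x)
        for block in self.blocks:
            h = h + block(h)                   # residual connections
        h, _ = self.lstm(h.transpose(1, 2))    # (batch, time, features)
        return self.fc(h[:, -1])               # last hidden state -> prediction
</code></pre>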
<h1>2D Model</h1>
<p>Most of our strong 2d models came from <a href="https://www.kaggle.com/proletheus" target="_blank">@proletheus</a> who was at 15th position back then when we merged . He used a mix of augmentations and good normalization technique that gave us a good amount of boost in 2d models . We used both CQT and CWT based models in our final ensemble . </p>
<p>We mainly used nnAudio CQT1992v2 and CQT2010 during the preprocessing with <br>
config. </p>
<pre><code>qtransform_params={"sr": 2048, "fmin": 30, "fmax": 400, "hop_length": 4,
"bins_per_octave": 12, "filter_scale" : 0.3}
</code></pre>
<p>The preprocessing sequence is as follows:<br>
numpy array --> signal tukey window --> band-pass filter --> normalize by norm_by = [7.729773e-21, 8.228142e-21, 8.750003e-21] --> CQT --> augmentations [colored noise and shift]</p>
<p>Torch-audiomentations was used to apply the augmentations. The colored-noise augmentation was applied channel-wise, while shift was applied sample-wise.</p>
<p>Please note that this is a small gist of our solution/journey .<br>
<a href="https://www.kaggle.com/benihime91" target="_blank">@benihime91</a> will be publishing a detailed solution/explanation with code by tomorrow </p>
<p>Thanks for reading</p> |
G2Net Gravitational Wave Detection | 5th place solution (instead of) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to kaggle, organizers, and community for intense battle at the end and live discussion in the beginning.</p>
<p>Not really a solution post, rather rant on EfficientNets. Solution itself is pretty straightforward, just 2d CNN go brrrrrr.</p>
<h1>TLDR</h1>
<ul>
<li>PSD whitening with precomputed mean PSD for each channel, custom window function : mirrored sigmoid</li>
<li>1d augs: shift, noise, mixup, cutmix, shuffle</li>
<li>2d CQT default settings fmin=16, fmax=1024, hop=12</li>
<li>5xCNN 2d ensemble with TTA, models and EMA's</li>
<li>EfficientNets are not so good (for me).</li>
</ul>
<h1>Intro</h1>
<p>As I don't know anything about the competition problem and have no experience with 1d-audio-spectrogram-ish types of tasks, I decided to go full research mode on a specific NN architecture.</p>
<p>Disclaimer: CNNs shouldn't be the right fit for spectrograms, but they are. I also tried Transformers; surely they can work, but score-wise I was barely able to<br>
get past 0.88 on the LB. I'm still pretty new to transformers though.</p>
<h1>EfficientNets</h1>
<p>What an original choice, you might say. Indeed, a lot of people use EN for some reason, but not me. I don't use the EN family because it is not … efficient.<br>
It is not much of a secret, and there are some papers pointing at that (e.g. RegNet, GENet, EffNetV2, etc.). But what if I was wrong all that time and the<br>
img/sec-to-accuracy ratio is much better?</p>
<p>It is not.</p>
<p>My best result with EN, after a lot of tuning and fitting, was easily beaten (not by much) on the first try with RegNet and even ResNet. <strong>And they are 4x faster.</strong><br>
Of course on some datasets EN will work better, but I have yet to find one other than ImageNet1k.<br>
There can be a lot of reasons why it is what it is; my thoughts: it happens because EN is "overfitting" ImageNet1k (could be the NAS or the compound scaling).</p>
<p>My best EN ensemble score was 8830 LB, after that I gave up and get ResNets into the mix in last couple of days of the competition.</p>
<p>Thanks for reading and please consider some other arch's if you dont already.</p> |
G2Net Gravitational Wave Detection | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you. 😃</p>
<p>I think so, the trained filter:<br>
<img src="https://i.imgur.com/bUCF7ye.gif" alt=""></p>
<p>tf.keras code:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.layers import Input, Conv1D, Concatenate, Multiply, Permute

# custom_int1 / custom_int2 are the author's custom kernel initializers (not shown here);
# each Conv1D learns a per-frequency sigmoid gate that is applied in the DCT domain.
inputs = Input((3,4096))
x = tf.signal.dct(inputs)
w = x[:1,:1,:1]*0+1                 # constant "ones" tensor used to drive the gate layers
w1 = Conv1D(4096, 1, activation='sigmoid', kernel_initializer=custom_int1, use_bias=False)(w)
w2 = Conv1D(4096, 1, activation='sigmoid', kernel_initializer=custom_int2, use_bias=False)(w)
w3 = Conv1D(4096, 1, activation='sigmoid', kernel_initializer=custom_int2, use_bias=False)(w)
w = Concatenate(axis=1)([w1,w2,w3])
x = Multiply()([x,w])               # trainable band-pass filter in the DCT domain
x = tf.signal.idct(x)/8192.
x = Permute((2,1))(x)
</code></pre>
<h1>model architecture</h1>
<p>wave -> dct -> trainable bp filter -> idct -> 1dcnn/cwt -> 1dcnn/2dcnn/resnet/effnetv2/lstm</p>
<p>best single model:4096x3 -> 1dcnn -> 512x256x3 -> resnet34<br>
(private LB 0.8810, single fold with TTA)</p>
<h1>augmentation</h1>
<ul>
<li>random shift wave separately, up to 1/32 second</li>
<li>random change phase</li>
</ul>
<h1>others</h1>
<p>fine tune on cropped wave (ex. [1536:-256]) for more model blending</p> |
G2Net Gravitational Wave Detection | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Preprocess:<br>
30Hz highpass filtering<br>
Normalized by the absolute means of individual observations</p>
<p>Augmentation:<br>
Roll +/-500 points (p=0.5)<br>
Scale 0.85-1.15 (p=0.5)<br>
Multiply -1 (p=0.5)</p>
<p>All implemented using TF, trained on TPU. The frontend below is used by all models (dim=128):</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers as L

dim = 128
input = L.Input(shape=(3, 4096))       # 3 observations
x = tf.reshape(input, (-1, 4096, 1))   # 3 observations folded into batch
x = wavenet(x, dim, 12, kernel_size=5) # 12 dilation levels (the "n" argument of wavenet below)
x = L.Dense(dim//4)(x)
x = wavenet(x, dim, 12, kernel_size=5)
x = L.Dense(dim//4)(x)
x = L.Dense(dim)(x)
x = tf.reshape(x, (-1, 3, 4096, dim))
x = tf.transpose(x, (0, 2, 3, 1))
x = L.BatchNormalization()(x)
x = L.Activation('gelu')(x)
</code></pre>
<ul>
<li>Model 1: EfficientNet-B3 (size 128x128), CV 87.79-88.04</li>
<li>Model 2: WaveNet + GRU, CV 87.80-88.05</li>
<li>Model 3: WaveNet, CV 87.83-88.07</li>
</ul>
<p>here it is.</p>
<pre><code>def wavenet(x, dim, n, kernel_size=3):
    dilation_rates = [2**i for i in range(n)]
    x = L.Conv1D(filters=dim, kernel_size=1, padding='same')(x)
    res_x = x
    for dilation_rate in dilation_rates:
        tanh_out = L.Conv1D(filters=dim,
                            kernel_size=kernel_size,
                            padding='same',
                            activation='tanh',
                            dilation_rate=dilation_rate)(x)
        sigm_out = L.Conv1D(filters=dim,
                            kernel_size=kernel_size,
                            padding='same',
                            activation='sigmoid',
                            dilation_rate=dilation_rate)(x)
        x = tanh_out*sigm_out
        x = L.Conv1D(filters=dim, kernel_size=1, padding='same')(x)
        res_x = res_x + x
    return res_x
</code></pre>
G2Net Gravitational Wave Detection | 8th Place Solution(augmentation part) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/richx86" target="_blank">@richx86</a> As you can see from <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>'s plot, zebra is the most powerful augmentation. I think if you use zebra, the uplift from the other augmentations is very small.<br>
Roughly,</p>
<ul>
<li>negative flip: 0.0003</li>
<li>LIGO swap: 0.0005</li>
<li>Zebra: 0.001</li>
<li>All in one: 0.0013~</li>
</ul>
<p><a href="https://www.kaggle.com/richx86" target="_blank">@richx86</a> if your team did not use so much augmentation, there is a lot of room to improve with this. Please try out.</p>
<p>Thank you for these plots of experiments!</p>
<p><a href="https://www.kaggle.com/richx86" target="_blank">@richx86</a> We found this <code>wave = wave1[:, 0:4096:2]</code> got 0.80 on CV. So I just tried to mixup with it.</p>
<p><a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> FYI, I used <code>target = max(target1, target2)</code></p>
<p>I'm sharing my worked augmentation part here.</p>
<ul>
<li>Zebra Mixup<br>
This augmentation worked great (+0.001); we called it zebra mixup.</li>
</ul>
<pre><code>wave_new[:, 0:4096:2] = wave1[:, 0:4096:2]
wave_new[:, 1:4096:2] = wave2[:, 1:4096:2]
</code></pre>
<ul>
<li>Negative Flip<br>
Worked only for negative samples</li>
</ul>
<pre><code>wave = wave[:, ::-1].copy()
</code></pre>
<ul>
<li>Swap with Other Negative<br>
A negative sample can be swapped with another negative sample</li>
</ul>
<pre><code>if np.random.uniform() > 0.5:
wave1[0] = wave2[0]
if np.random.uniform() > 0.5:
wave1[1] = wave2[1]
if np.random.uniform() > 0.5:
wave1[2] = wave2[2]
</code></pre>
<ul>
<li>LIGO Swap<br>
The two LIGO channels are essentially the same, so we can swap them.</li>
</ul>
<pre><code>w0 = wave1[0].copy()
wave1[0] = wave1[1]
wave1[1] = w0
</code></pre>
<p>It's hard to quantify precisely, because it depends on the rest of the setup. For instance, zebra is the most powerful augmentation, so it reduces the improvement from negative flip.<br>
That means we have 2**4 patterns.</p>
G2Net Gravitational Wave Detection | A technique that worked for me (~0.001-0.002 improvement) that none of the posted (so far) solution used | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>That is, making sure that each batch has some high- and some low-SNR samples (I designed each batch so that 30% of it has low SNR). This helps prevent bad gradients from overfitting the solution.</p>
<p>Since the host didn't include other parameters and SNR information, I decided to use a simple model's output as an indicator of SNR. Basically, take any model that has at least 0.87 AUC, get its predictions on the training set, then decide a cutoff confidence score and compute abs(pred-label). SNR = high if abs(pred-label) < cutoff else low (I used 0.5 as my cutoff).</p>
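<p>A rough sketch of how such SNR-aware batching could be wired up (the 30% ratio and the cutoff follow the description above; everything else, including the names, is an assumption):</p>
<pre><code>import numpy as np

def make_snr_buckets(oof_pred, label, cutoff=0.5):
    # "high" = samples where a weak (~0.87 AUC) model is already close to the label
    return np.abs(oof_pred - label) < cutoff     # boolean mask: True = high SNR

def balanced_batches(is_high, batch_size=256, low_frac=0.3, seed=0):
    rng = np.random.default_rng(seed)
    high = rng.permutation(np.where(is_high)[0])
    low = rng.permutation(np.where(~is_high)[0])
    n_low = int(batch_size * low_frac)
    n_high = batch_size - n_low
    for b in range(min(len(low) // n_low, len(high) // n_high)):
        batch = np.concatenate([low[b * n_low:(b + 1) * n_low],
                                high[b * n_high:(b + 1) * n_high]])
        yield rng.permutation(batch)             # ~30% low-SNR samples per batch
</code></pre>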
<p>This technique improved my 1D from 0.8738 to 0.8760 and my B0 from 0.8749 to 0.8757. Although curiously enough, it degraded my best single model B3 of 0.8783 to 0.8777. That said, I discovered this like 2 days ago so I didn't have time to tune it properly.</p>
<p>P.S. If you train your model on decreasing order of SNR, your model will overfit to oblivion within the first epoch.<br>
P.S.S. Can anyone point me toward the notebook that <a href="https://www.kaggle.com/c/g2net-gravitational-wave-detection/discussion/275316" target="_blank">https://www.kaggle.com/c/g2net-gravitational-wave-detection/discussion/275316</a> is complaining about?</p> |
G2Net Gravitational Wave Detection | LB61th PB72th solution & Please tell me your opinion! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you everyone and congratulations!</p>
<p>I thought I would win a silver medal, but I had a small shake-down :(<br>
I will keep challenging other Kaggle competitions from now on.</p>
<p>I don't have a signal-processing background, so this was a very interesting competition for me.</p>
<h1>Solution</h1>
<p>I ensembled the 21 models below, with weights given by ridge coefficients fit on OOF predictions.</p>
<p>Ensemble result LB:0.8807 PB:0.8783</p>
<ul>
<li>CWT(256x256) + EffNetB7 : CV:0.8750 LB:0.8785</li>
<li>CWT(384x384) + EffNetB7 : CV:0.87613 LB:0.8790</li>
<li>CWT(448x448) + EffNetB7 : CV: 0.87643 LB: 0.8789</li>
<li>CWT(384x384) + Input scaling to similar order + EffNetB7 : CV:0.87608 LB:0.8791</li>
<li>CWT(512x512) + EffNetV2M : CV:0.87614 LB:0.8788</li>
<li>CWT(576x576, wavelet_width=8) + EffNetV2M : CV:0.87650 LB:0.8792</li>
<li>CWT(586x586, wavelet_width=6) + EffNetV2M : CV:0.87647 LB:0.8791</li>
<li>CWT(420x420) + EffNetB5 : CV:0.8752 LB:0.8788</li>
<li>CWT(512x512, wavelet_width=6) + DenseNet201 : CV:0.87527 LB:0.8782</li>
<li>CWT(640x640) + EffNetB5 : CV:0.8710 LB:0.8774</li>
<li>CWT(576x576, wavelet_width=6) + EffNetV2M : CV:0.87652 LB:0.8790</li>
<li>bandpass + CWT(?x?, ) + ResNet34 : </li>
<li>bandpass + CWT(576x576, wavelet_width=6) + EffNetV2M : CV:0.8765 LB:0.8791</li>
<li>bandpass + CWT(512x512, wavelet_width=6) + EffNetB7</li>
<li>bandpass + CWT(384x384, wavelet_width=6, upper_freq=500) + VitB16</li>
<li>bandpass + CWT(448x448) + EffNetB7</li>
<li>1dCNN version0 : CV 0.86508, LB:0.8710</li>
<li>1dCNN version1 : CV:0.8667 Lb:0.8738</li>
<li>1dCNN version2 : CV:0.8651 Lb:0.8728</li>
<li>1dCNN version3 : CV:0.8663 Lb:0.8716</li>
<li>1dCNN version4 : CV:0.8669 LB:0.8759</li>
<li>1dCNN version5 : CV:0.8689 LB:0.8740</li>
</ul>
<p>I tried an ensemble containing all of the models above with ridge weights. In this case some model weights were negative, so I forcibly normalized them to [0, 1] (LB: 0.8808, PB: 0.8783).<br>
However, the ensemble using only positive weights was better on the private LB.</p>
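<p>A small sketch of this kind of OOF ridge blending with forced non-negative weights (a generic illustration, not necessarily the exact procedure used here):</p>
<pre><code>import numpy as np
from sklearn.linear_model import Ridge

def ridge_blend_weights(oof_preds, y, alpha=1.0):
    # oof_preds: (n_samples, n_models) out-of-fold predictions, y: targets
    w = Ridge(alpha=alpha, fit_intercept=False).fit(oof_preds, y).coef_
    w = np.clip(w, 0, None)                      # drop negative weights
    return w / w.sum()                           # renormalize into [0, 1]

# blended test prediction: test_preds @ ridge_blend_weights(oof_preds, y)
</code></pre>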
<p>This is power solution.</p>
<p>My attempts and questions are below; could you tell me your opinion?</p>
<h1>Q1. CWT vs CQT</h1>
<p>Firstly, I tried CQT, but the CV score did not improve. Therefore, I used CWT and got CV ~0.874.</p>
<p>A larger model and larger input size gave me CV ~0.8765, LB ~0.879.</p>
<p>ex)</p>
<ul>
<li>EfficientNet b7 , 512x512 + bpf CV:0.8765 LB:0.8792</li>
<li>EfficietnNet V2M, 576x576 + bpf CV:0.8765 LB:0.8792</li>
</ul>
<p>For ensembling, I retried using CQT, but its score was worse than CWT's.</p>
<p>Which did you use ?<br>
Please tell me your tricks for CQT and CWT.</p>
<h1>Q2. 1d-cnn using raw signal</h1>
<p>I tried 1d-cnn and got CV:0.8669 LB:0.8759 .</p>
<p>I found that a larger kernel size is better than stacking small kernels, but CV saturated. I tried a 1d-conv transformer-encoder model, but the score became worse.</p>
<p>In this competition, TensorFlow models seemed to get higher scores than PyTorch models.<br>
For some reason, I implemented my 1d-cnn using PyTorch.</p>
<p>Does anyone get high score using 1d-cnn implemented with pytorch?</p>
<h1>Q3. ViT & SWIN</h1>
<p>I used ViT and SWIN whch is lower CV but boost ensemble.</p>
<p>If you use ViT & SWIN, please tell me your cv and lb.</p>
<h1>Q4. Auto-encoder</h1>
<p>I tried Auto-encoder, and tried to use it for anomaly detection.<br>
However, this was not good.</p>
<p>Does anyone succeed in auto-encoder approach?</p>
<p>Thanks</p> |
G2Net Gravitational Wave Detection | Notes on G2 Comp | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>A couple more things I tried, forgot to mention:</p>
<ul>
<li>My 1D nets were shallow, as stated. Instead of .flatten(), I used GAP on the higher-dimensional channels</li>
<li>I did a seq2seq loss. I noticed this allowed models to converge a lot quicker. Both 1D and 2D models produce reduced-size temporal representations. For ve+ all elements = 1, and = 0 for ve-. For 1D models I noticed that the resulting labels identified where the GW was. For 2D models, there was no such time encoding, and whatever was being produced was all close to 1 or all close to 0.</li>
<li>Training 1D model on S2S 'faux segmentation' from above</li>
<li>Stacking - the first ~15 models used the same folds. But when I attempted submission pipeline, stacking sucked and I made that post. For all future models, I generated random folds</li>
<li>Difficulty + Target Iterative Stratification: Using best model, compute residual and bucket into difficulty levels. Then multi-label stratify on that and target</li>
<li>Final experiment blend (my portion): I just use cuml sgd to select weights on all of my experiments. CV for individual models ranged from 853-876, and the best blend of these was CV=0.8798601, public=0.8818, private=0.8796.</li>
<li>I'll continue to add here as I recall things.</li>
</ul>
<p>In total, I trained 96 full-fold models for this competition. Having nice local compute helps, but having deep insight into the problem + good exploratory prowess is more important (both of which I lack). I guess predestination / luck also plays its role :-). Some of the things I had tried off the top of my head (since I've already deleted my working directory):</p>
<ul>
<li>Channel shuffle</li>
<li>Channel shifting</li>
<li>Random noise</li>
<li>1D Net, all my 1D nets were really shallow, just 4 layers w/o residual</li>
<li>2D Net, couldn't get them to converge as nicely as the rest of folks</li>
<li>Downloading actual LIGO 3 detector data for pretraining - it's very messy and has bands not represented in the synthetic data</li>
<li>GW injection using pycbc in 1d and 2d</li>
<li>Injected gw param regression as pretrain task</li>
<li>Cutout</li>
<li>Mixup</li>
<li>Trainable CQT</li>
<li>Very wide kernel sizes</li>
<li>Kernel smoothing and resume training</li>
<li>Frequency domain CQT</li>
<li>FD 1D net, 6 channels</li>
<li>Training on bandpassed first derivative data</li>
<li>Training on bandpassed integrated data</li>
<li>PCA, SVD, and FastICA transformations as channel augmentations</li>
<li>Signal flipping (only works in 1D <code>x=-x</code>. You can train a model like this then do TTA)</li>
<li>Signal compression and stretching - I only attempted this in 1D as TTA and for training</li>
<li>Random Mean and STD shift TTA</li>
<li>Whitening by creating ~6sec signals: basically I'd concat the signal with itself on the right and left, but first I'd find the location on the right that has the most similar value and slope and repeat from there. Same as the left. I thought by using longer signal, we take care of the ASD generation issue. It 'worked' but the model did not benefit</li>
<li>Whitening using auto regression residuals</li>
<li>Swin Transformer</li>
<li>Galerkin transformer in 1D time domain, bucketing signal into 8hz wide chunks</li>
<li>Probably more that I am forgetting</li>
</ul>
<p>I feel like instead of trying 903209230923 things, better to abandon quickly what doesn't work, focus on what does, and importantly, drive it to its logical conclusion. I was very early on the 1D train and was training nets with the full competition data in ram. But after I got to a decent level in 1D (at the time) I started exploring all these different avenues instead without focusing more in improving the net's architecture.</p>
<p>I'd also like to express gratitude and appreciation to my team mates. I actually ran out of steam about two weeks ago, but contributions and motivation from <a href="https://www.kaggle.com/misaki1111" target="_blank">@misaki1111</a> and <a href="https://www.kaggle.com/jpison" target="_blank">@jpison</a> helped carry me over. Thank you.</p>
<p>Nothing stellar, I believe that experiment performed 'average' compared to other models I had trained. What I recall the most about it was:</p>
<ol>
<li>FastICA complained a lot about convergence issues, so I had to really up the number of iterations and silence warnings when fitting it.</li>
<li>tSVD of course requires n_components < input_dimensions, so for that aug, one channel was lost. Rather than deleting, just choose a random (original) one to place back in.</li>
<li>PCA was fast and straightforward to implement (a rough sketch of this channel-augmentation idea follows below).</li>
</ol> |
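<p>Putting the notes above together, a rough sketch of the PCA / truncated-SVD / FastICA channel augmentations (an illustration only; decompositions are fit per sample across the 3 detector channels, and anything not stated above is an assumption):</p>
<pre><code>import numpy as np
from sklearn.decomposition import PCA, TruncatedSVD, FastICA

def decompose_channels(wave, kind='pca'):        # wave: (3, n_samples)
    x = wave.T                                   # sklearn expects (n_samples, n_features=3)
    if kind == 'pca':
        comps = PCA(n_components=3).fit_transform(x)
    elif kind == 'ica':
        comps = FastICA(n_components=3, max_iter=2000, tol=1e-3).fit_transform(x)
    else:                                        # truncated SVD needs n_components < 3
        comps = TruncatedSVD(n_components=2).fit_transform(x)
        keep = wave[np.random.randint(3)]        # put a random original channel back
        comps = np.column_stack([comps, keep])
    return comps.T                               # back to (3, n_samples)
</code></pre>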
G2Net Gravitational Wave Detection | Public 69th / Private 54th solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Acknowledgements</h2>
<p>Thanks to Kaggle and the hosts for holding this exciting competition! I've learned many things in this competition. Also thanks to all participants, especially my teammates schulta( <a href="https://www.kaggle.com/schulta" target="_blank">@schulta</a> ) and Chizuchizu( <a href="https://www.kaggle.com/chizuchizu" target="_blank">@chizuchizu</a> )</p>
<h2>Models</h2>
<p>Scores of each model were as follows;</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>CV</th>
<th>LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>EfficientB7ns</td>
<td>0.87706</td>
<td>0.8798</td>
</tr>
<tr>
<td>EfficientB3ns</td>
<td>0.87267</td>
<td>0.8747</td>
</tr>
<tr>
<td>1dCNN</td>
<td>0.87281</td>
<td>0.8769</td>
</tr>
<tr>
<td>swin transformer</td>
<td>0.86977</td>
<td>0.8723</td>
</tr>
</tbody>
</table>
<p>Detail of each model was as follows;</p>
<p><strong>CQT</strong></p>
<ul>
<li>EfficientNetB7ns<ul>
<li>cqt parameter: {"sr": 2048, "fmin": 20, "fmax": 500, "hop_length": 8, "bins_per_octave": 12, "filter_scale": 0.7} (see the sketch after this list)</li>
<li>5fold validation</li>
<li>Image size: 512 x 512</li>
<li>Triplet Attention (<a href="https://arxiv.org/abs/2010.03045" target="_blank">paper</a>)</li>
<li>Augmentation: shift in x-axis direction</li></ul></li>
<li>Swin Transformer<ul>
<li>cqt parameter {"sr": 2048, "fmin": 20, "fmax": 500, "hop_length": 8, "bins_per_octave": 12, "filter_scale": 0.7}</li>
<li>5fold validation</li>
<li>Image size: 384 x 384</li>
<li>Augmentation: shift in x-axis direction</li></ul></li>
</ul>
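<p>For reference, a hedged sketch of a CQT front end with the parameters listed above (assuming nnAudio's <code>CQT1992v2</code>, which exposes these parameters and was widely used in this competition; not necessarily the exact code used here):</p>
<pre><code>import torch
from nnAudio.Spectrogram import CQT1992v2

cqt = CQT1992v2(sr=2048, fmin=20, fmax=500, hop_length=8,
                bins_per_octave=12, filter_scale=0.7)

def wave_to_spec(wave):                          # wave: (3, 4096) numpy array
    x = torch.from_numpy(wave).float()
    return cqt(x)                                # (3, freq_bins, time_steps), then resize to 512x512
</code></pre>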
<p><strong>CWT</strong></p>
<ul>
<li>EfficientNetB3ns</li>
<li>5fold validation</li>
<li>image size : 256 x 256</li>
<li>Triplet Attention </li>
<li>Augmentation: shift in x-axis direction</li>
</ul>
<p><strong>1dCNN</strong></p>
<ul>
<li>We used this <a href="https://www.kaggle.com/scaomath/g2net-1d-cnn-gem-pool-pytorch-train-inference" target="_blank">public kernel</a> as a baseline. Thanks to Shuhao Cao( <a href="https://www.kaggle.com/scaomath" target="_blank">@scaomath</a> )</li>
<li>5fold validation</li>
<li>add one more conv layer</li>
<li>bandpass_params = dict(lf=25, hf=500)</li>
</ul>
<h2>Other things</h2>
<p><strong>Worked</strong></p>
<ul>
<li>SAM optimizer</li>
<li>Normalization (separately for each gravitational wave observatory)</li>
<li>align spectrograms from each gravitational-wave interferometer along the channel axis</li>
</ul>
<p><strong>Not Worked</strong></p>
<ul>
<li>Mix up</li>
<li>VQT</li>
<li>pretrain using SETI data</li>
</ul>
<p><strong>Idea</strong></p>
<ul>
<li>matched filtering</li>
<li>2 stage learning</li>
<li>denoising auto-encoder</li>
</ul>
<p>If you have a question about this solution, feel free to ask!<br>
Thank you for reading!</p> |
G2Net Gravitational Wave Detection | Top 1 solution: Deep Learning part | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: G2Net Gravitational Wave Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Unfortunately no, I have not uploaded this one to github</p>
<blockquote>
<p>what if we use it on NLP?</p>
</blockquote>
<p>unfortunately, the last time I seriously worked with NLP data was when GloVe and word2vec + LSTM were state-of-the-art approaches 😄</p>
<blockquote>
<p>How would the 2d decoder image look like?</p>
</blockquote>
<p>Have not done any experiments with UNet here though I doubt that it would bring any improvements to the score. The limit here is how well Conv1D can handle signals with low SNR. <br>
While playing with synthetic data we saw a lot of examples when model could properly predict <code>1.2* signal + noise</code> as 1. with high probability and the same <code>singal + noise</code> with close to 0 probability. There was a clear threshold (SNR wall) where model could not differentiate GW from noise.</p>
<p>writing a good paper would take more time than the challenge itself😧</p>
<p>Surprisingly, random is working just fine; I just hoped that the model learned the right representation needed for classification :)<br>
If we add the segmentation mask as an additional target then it learns sorting implicitly. For example, you can see the effect of the notch filter at 300Hz on the segmentation mask in the main post. <br>
Also, we removed the low-frequency signal part from the masks because signals are attenuated at low frequencies due to filtering. <br>
So in general the model learns to detect something that is similar to a GW; the visualisation of logits clearly shows that. <br>
<img src="https://i.imgur.com/IP9Qspo.png" alt=""><br>
Spectrograms of GWs look similar to the masks, just with the low-frequency part.</p>
<p>That would be much harder without big ensembles, on the other hand that would push us to experiment more with Conv1D architecture and augmentations. </p>
<p><a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a> <a href="https://www.kaggle.com/c/g2net-gravitational-wave-detection/discussion/275507" target="_blank">https://www.kaggle.com/c/g2net-gravitational-wave-detection/discussion/275507</a></p>
<p>Yes it is quite tricky to make it work. <a href="https://www.kaggle.com/denisbsu" target="_blank">@denisbsu</a> is writing a separate post on that right now, will be ready soon </p>
<p>We decided to make two posts to make them more or less focused and concise. <br>
DSP part <a href="https://www.kaggle.com/c/g2net-gravitational-wave-detection/discussion/275507" target="_blank">https://www.kaggle.com/c/g2net-gravitational-wave-detection/discussion/275507</a></p>
<h3>Baseline solution, simple Conv1D (0.88 public LB)</h3>
<p>At the very beginning we tried different inputs:</p>
<ol>
<li>CQT - could not make it past 0.87</li>
<li>Spectrograms - with nnAudio, a bit better</li>
<li>raw signal - much better </li>
</ol>
<p>After applying a highpass filter at 20Hz, a 1D conv net on the stacked-channels input quickly produced good val scores. It was also much easier to run experiments, as training on a single GPU took less than 1 hour. The model was very simple, just a bunch of conv-bn-silu blocks and maxpools. For convolutions we used kernel sizes (by block) 64 -> 32 -> 16 -> 8. </p>
<p>At that stage for augmentations we used:</p>
<p>channel shuffle (only Hanford, Livingston)<br>
minor time shifts between channels, 5ms</p>
<p>This approach gave us 0.877 public LB score with a single 5fold model.</p>
<p><strong>SGD is better than AdamW for Conv1D without synthetic data</strong></p>
<p>Even though model capacity was quite low, the model quickly overfitted, usually after 20 epochs, which took around 30 mins. <br>
Switching to SGD with weight decay and nesterov momentum improved LB score to <strong>0.880</strong>. Other Optimizers like AdamW with whatever weight decay, MadGrad gave lower quality.</p>
<h3>Improved Conv1D model (0.883 public LB)</h3>
<p>At that time the generated synthetic dataset was not good enough and we felt that 1D model could be improved. <br>
Following the Inception V3 approach we added different kernel sizes in each conv block, the starting block was with 32, 64, 128 kernel sizes. <br>
This gave us a boost from 0.88 to 0.881 on the LB.<br>
Next step was to try even more kernel sizes as it also made sense from DSP theory. Inception like block with 5 different kernels (16, 32, 64, 128, 256) allowed us to get 0.8823 single model LB score. <br>
A small ensemble improved LB score to <strong>0.883</strong>. <br>
Adding more kernel sizes did not improve CV/LB scores.</p>
<pre><code># Main building block for Conv1D models
class ConcatBlockConv5(nn.Module):
def __init__(self, in_ch, out_ch, k, act=nn.SiLU):
super().__init__()
self.c1 = conv_bn_silu_block(in_ch, out_ch, k, act)
self.c2 = conv_bn_silu_block(in_ch, out_ch, k * 2, act)
self.c3 = conv_bn_silu_block(in_ch, out_ch, k // 2, act)
self.c4 = conv_bn_silu_block(in_ch, out_ch, k // 4, act)
self.c5 = conv_bn_silu_block(in_ch, out_ch, k * 4, act)
self.c6 = conv_bn_silu_block(in_ch * 5 + in_ch, out_ch, 1, act)
def forward(self, x):
x = torch.cat([self.c1(x), self.c2(x), self.c3(x), self.c4(x), self.c5(x), x], dim=1)
x = self.c6(x)
return x
</code></pre>
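<p>The <code>conv_bn_silu_block</code> helper is not shown above; a plausible definition, judging from the name (Conv1d + BatchNorm + SiLU, assuming <code>import torch.nn as nn</code> and 'same' padding so the branches with different kernel sizes can be concatenated), could look like this:</p>
<pre><code>def conv_bn_silu_block(in_ch, out_ch, kernel_size, act=nn.SiLU):
    # 'same' padding keeps the temporal length identical across the different kernel
    # sizes, so the branch outputs can be concatenated along the channel dimension
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size, padding='same', bias=False),
        nn.BatchNorm1d(out_ch),
        act(),
    )
</code></pre>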
<p>Hyperparameters</p>
<ul>
<li>optimizer: SGD, wd=1e-4, nesterov momentum</li>
<li>learning rate: 0.1 with cosine annealing</li>
<li>batch size: 128</li>
<li>epochs: 40 </li>
<li>input: 3 channels of raw signal filtered with butterworth filter at 20hz</li>
<li>augmentations: freq masking, time masking, small shifts, channel shuffle</li>
</ul>
<h3>Using synthetic dataset (0.886 public LB)</h3>
<p>As soon as Denis found a more or less good approach to signal/noise generation we started experimenting with additional data. <br>
Overall we had 2 million noise samples and 1 million pure signal samples. During training positive sample = random noise sample + random signal sample.</p>
<p>From these experiments </p>
<ul>
<li>mixing synthetic data with the train dataset did not work </li>
<li>augmentations are actually harmful in this case</li>
<li>pretraining on synthetic data and fine tuning on the train set works great</li>
</ul>
<p>During pretraining stage for simplicity we used the same amount of samples in epoch as in the train set.<br>
Pre-training around 100 epochs and fine tuning 5 folds on the train set gave 8836 on the public LB for the single Conv1d model. <br>
As detectors, especially Virgo, have different noise distribution it makes sense to use a separate conv1d encoder for each channel. Split encoders and a linear classifier on top of concatenated features boosted the LB score to <strong>0.8842</strong>.</p>
<p>It is clear that a fully connected layer is not the best fusion approach for the model with separate encoders for each channel. <br>
That’s where resnet34 came into play and surprisingly it worked better than other 2D models. We also predicted signal parameters during pretraining (SNR, chirp mass, Q) which also brought minor improvements. <br>
Pretraining 200 epochs and fine tuning 5 folds just 1 epoch gives <strong>0.8858</strong> public LB score.<br>
Augmentations during finetuning or pretraining negatively affected CV, so the best models are without any augmentations and trained with AdamW optimizer.<br>
<img src="https://i.imgur.com/t1X1mL4.png" alt="model"></p>
<p>Input to Resnet34 looked the following way (Handford band)</p>
<p><img src="https://i.imgur.com/iMc4Yrp.jpeg" alt="1D features"></p>
<h3>Segmentation</h3>
<p>Binary segmentation using output of Conv1D predicted good masks for strong signals but did not improve recall on weak signals. In general it could be a useful tool to analyse the data, but we did not get any boost on the LB from that.<br>
<img src="https://i.imgur.com/zLPtw6S.png" alt="good signal segmentation"></p>
<h3>Things that did not work</h3>
<p>There were much more experiments that I won't describe (including different frontends, training approaches etc.), but most noticeable are:</p>
<p><strong>Denoising autoencoder</strong></p>
<p>I trained different variants of autoencoders to separate noise and signals which worked great for strong signals but produced poor results on medium to low amplitude signals.</p>
<p><strong>OHEM collapse and reverse labels mystery</strong></p>
<p>I tried different versions of hard example mining to improve model performance on hard samples but usually the model collapsed and started predicting the same probability for all samples. </p>
<p>Which led to an interesting experiment:</p>
<ul>
<li>from full OOF predictions select positive samples with low probability and negative with high probability</li>
<li>train on this subset but validate on proper split</li>
<li>evaluate using predicted probabilities</li>
<li>evaluate using reversed predctions (1 - p) </li>
</ul>
<p><img src="https://i.imgur.com/ZbnpqeV.png" alt=""></p>
<p>That result was really confusing and at first we thought that the dataset was mislabeled. Later even with generated synthetic data we had the same problem. <br>
It is clear that because of the SNR wall some positive samples can be considered as just noise, but how the model generalized to predict signal from noise samples, that’s what we could not find. </p>
<p>We also trained 20 folds of the same model - it gave 0.8862, but basically anything above that is exponentially hard; at that point all models make the same mistakes and the ensemble brings a very minor boost. <br>
The key was not resnet34 but the features from Conv1d.</p>
Galaxy Zoo - The Galaxy Challenge | Code and documentation for the winning solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Galaxy Zoo - The Galaxy Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I've made the code for my solution available on GitHub under a BSD 3-clause licence: https://github.com/benanne/kaggle-galaxies</p>
<p>Instructions on how to train the models are in the README file. Extended documentation is also available: https://github.com/benanne/kaggle-galaxies/blob/master/doc/documentation.pdf</p>
<p>Here's a short post about the code being published on my blog: http://benanne.github.io/2014/04/16/galaxy-zoo-code.html</p>
<p>For those who missed it, I previously explained my solution in detail on my blog as well: http://benanne.github.io/2014/04/05/galaxy-zoo.html</p>
<p>If you’ve found the code useful, dropping me a line to let me know how you used it is appreciated :)</p> |
Gendered Pronoun Resolution | 11th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Gendered Pronoun Resolution
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>There are two main aspects:
1. Use BN to avoid some fold non-convergence problems.
2. Because the over-fitting is very serious, dropout can be added directly after bert-feature. And it can add up a lot like <code>Dropout(0.9)</code></p>
<p>Thank you very much for your congratulations. In fact, up to now I still don't know how to face you and Giba ... So this message really means a lot to me. One of the reasons I chose to stay in Kaggle was to hope that one day I could make up for what I had done before. Now it seems that things are really heading in the right direction. Thanks ; )</p>
<p>Thanks~</p>
<p>It's really a pity. I think you just lack a little bit of luck. All zero problems sometimes make us look less motivated. Nevertheless, your contribution to the forum is obvious to all. Looking forward to your performance in Jigsaw. I hope we can find the right way of find-tuning there A-A. </p>
<h2>Base model</h2>
<p>I have 2 base model all coming from public kernel. </p>
<ul>
<li><a href="https://www.kaggle.com/mateiionita/taming-the-bert-a-baseline">Taming the BERT</a> -> <code>Model 1</code></li>
<li><a href="https://www.kaggle.com/chanhu/bert-score-layer-lb-0-475">Bert + Score Layer </a> -> <code>Model 2</code></li>
</ul>
<p>Thanks for their selfless sharing.</p>
<h2>Modification</h2>
<ol>
<li>Using large-bert instead of base-bert(~0.04).</li>
<li>Concat last 8 layers instead of the last layer (~0.1); a sketch of this feature extraction follows after this list. Here I made some attempts: increasing the number of layers beyond the 8th does not help much.</li>
<li>Concat uncased-bert and cased-bert(~0.04). In fact, they can also be used separately in different models and then merged, but doing so will increase the number of models, I don't have time to adjust them one by one, so I used them directly together. If you want to go deeper on ensemble, use uncased model and cased model separately to provide better diversity. Or instead of using the concat method but using the averaging method. But based on my experience in <code>Quora</code> and <code>Jigsaw</code>, this method usually performs slightly worse, at least on local-cv.</li>
<li>Using 10fold-cv instead of 5fold-cv (~0.015). The improvement is much bigger than I expected. Maybe it's because the training data is too small, I guess A-A.</li>
<li>Adding <code>A*P, B*P</code> (~0.05), only for <code>Model 1</code>. I tried some combinations like <code>A + B</code>, <code>A - B</code>, etc. In the end, only this set of features has an effect. This finally stabilized the gap between <code>Model 1</code> and <code>Model 2</code> at 0.02.</li>
<li>Tuning mlp and ffnn(~0.01). In fact, this step is very important, and it is executed earlier than other steps. It ensures that each fold can converge normally and avoid over-fitting problems caused by excessive use of bert. </li>
</ol>
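<p>A hedged sketch of the "concat last 8 layers" feature extraction for the A, B and pronoun tokens (shown with the modern <code>transformers</code> API for illustration; the token positions are assumed to be known, and this is not the exact code used in the solution):</p>
<pre><code>import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained('bert-large-uncased')
bert = AutoModel.from_pretrained('bert-large-uncased', output_hidden_states=True).eval()

def abp_features(text, a_idx, b_idx, p_idx, n_layers=8):
    # a_idx / b_idx / p_idx: token positions of A, B and the pronoun (assumed known)
    enc = tok(text, return_tensors='pt', truncation=True)
    with torch.no_grad():
        hidden = bert(**enc).hidden_states               # embeddings + 24 encoder layers
    layers = torch.cat(hidden[-n_layers:], dim=-1)       # (1, seq_len, 1024 * n_layers)
    return torch.cat([layers[0, a_idx], layers[0, b_idx], layers[0, p_idx]])
</code></pre>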
<h2>Ensemble</h2>
<p>I chose four models for ensemble: <code>Model 1</code> * 1, <code>Model 2</code> * 3(Architecture is slightly different). These four models are handed over to lgb and lasso for ensemble. The final result is as follows:
- LGB - CV 0.3059 - LB 0.2047
- Lasso - CV 0.3074 - LB 0.2336</p>
<h2>What didn't work</h2>
<ul>
<li>Fine-tuning. Until now, I still don't know the right way to do fine-tuning; it's very awkward.</li>
<li>Correct the wrong label. </li>
<li>Elmo.</li>
<li>Siamese network. Later I found out that it is helpful to solve this problem, but it requires some skill when using it.</li>
</ul>
<h2>Written at the end</h2>
<ul>
<li>Thanks to Google AI and Kaggle for organizing this research competition giving everyone the opportunity to try the LM. During Quora, I often wondered how good it would be if I could use bert. The only pity is that I had a lot of time at the time but the rules didn't allow it, but this time I didn't have a lot of time to try LM.</li>
<li>I think augmentation is also important in addition to fine tuning. I am even curious whether the improvement brought by fine tuning will actually be larger than that from some of the structures or features specifically designed for this problem.</li>
<li>I think the name of my team is really good, isn't it?</li>
</ul>
<p>See you in Jigsaw. Have a good time.</p> |
Gendered Pronoun Resolution | 12 place writeup | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Gendered Pronoun Resolution
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Hai,</p>
<blockquote>
<p>One thing we didn't have time to try is that to fine tune BERT to get GAP-oriented attention scores.</p>
</blockquote>
<p>We also didn't use fine tuning, but I really believe that our model is still pretty far from what can be achieved even without it:</p>
<p>Reserves that we did not use:</p>
<ul>
<li>Test-time augmentation (even an extremely basic implementation with one extra option immediately gives 0.20316)</li>
<li>Accumulating scores against all mentions of a name instead of the nearest one</li>
<li>Extra features like page name, which may become significant when attention-based methods do not give a clear winner</li>
<li>Using more than one pre-trained BERT</li>
<li>etc.</li>
</ul>
<p>In any case, it was a great competition and a lot of fun.
Regards,
Pavel</p>
<p>Our approach was based on extracting co-attention scores from a pretrained BERT model. </p>
<p>These are selected lines from our preprocessing pipeline:</p>
<pre><code>preprocess:
  - try_search_near:                   # try to search for a mention of the name located closer to the pronoun in question
  - replace_unknowns_preprocessor:     # replace names that are not part of the pipeline
      gender: true                     # try to select a name of the appropriate gender when gender statistics are known
      cased: false                     # use lower case
      lastNames: true                  # try to replace last names with well-known last names
  - bert_encode:                       # extract attention scores from BERT
      maxToken: true                   # if a named entity spans several tokens, take the token with the maximum attention score
  - disk-cache:
      split: true
  - extract_token_bert_scores_3class:
      startLayer: 9                    # start from the 9th layer of BERT (selected based on CV)
</code></pre>
<p>Then after this pipeline we applied two models:</p>
<p>1) NN:
<code>
net:
- flatten
- dropout: 0.5
- dense: [144, relu]
- dense: [3, softmax]
</code>
2) LGBM - with hyperparameter search</p>
<p>In both cases we used stratified 10-fold CV and blended models across folds.</p>
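<p>A hedged Keras restatement of the small NN config above (the input shape of the extracted attention-score features is not given, so it is left as a parameter):</p>
<pre><code>from tensorflow.keras import Sequential, layers

def build_head(input_shape):
    return Sequential([
        layers.Flatten(input_shape=input_shape),
        layers.Dropout(0.5),
        layers.Dense(144, activation='relu'),
        layers.Dense(3, activation='softmax'),   # P(A) / P(B) / P(Neither)
    ])
</code></pre>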
<p><strong>Big Error:</strong> We forgot to use test augmentation in our final predictions (you may replace names with different ones that are known to BERT and average the predictions); this cost us a gold medal.</p>
<p>Regards,
Pavel</p> |
Gendered Pronoun Resolution | 15th place solution [0.21074] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Gendered Pronoun Resolution
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My approach was quite straightforward for this competition: I used embeddings extracted by BERT plus some other features, and a fully connected NN to get the final predictions.</p>
<p><strong>Embeddings:</strong>
I used BERT-large uncased (without any finetuning) and extracted embeddings from 9 layers – from -1 to -9. I concatenated all of them and used the resulting 1024*9 = 9216-dimensional vectors as the final embeddings. Other sets of layers gave me worse results.</p>
<p><strong>Additional features:</strong>
- Pronoun offset
- Pronoun token offset
- A and B tokens offsets
- A and B lengths
- Distance between a/b and pronoun (in symbols and tokens)
- Some features from this kernel <a href="https://www.kaggle.com/pheell/look-ma-no-embeddings">https://www.kaggle.com/pheell/look-ma-no-embeddings</a></p>
<p><strong>Additional data:</strong>
- Manually corrected labels from here <a href="https://www.kaggle.com/c/gendered-pronoun-resolution/discussion/81331">https://www.kaggle.com/c/gendered-pronoun-resolution/discussion/81331</a></p>
<p><strong>Model:</strong>
1. All other (non-bert) features were passed through the linear layer:
Other_features (separate A+P and B+P)-> bn -> linear -> relu -> preprocessed other features
2. Concatenated other and bert features were passed through the FCNN:
(A/B BERT emb + preprocessed other features) -> linear(1800) -> relu -> dropout -> bn -> linear(500) -> relu -> dropout -> linear(1) -> A/B pred
3. The final prediction = softmax(A pred, B pred, 0)</p>
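<p>A hedged PyTorch sketch of the head described above (layer sizes follow the description, the ~90% dropout comes from the training notes below, and everything else, such as the extra-feature width or weight sharing between A and B, is an assumption):</p>
<pre><code>import torch
import torch.nn as nn

class ScoreHead(nn.Module):
    # scores one candidate (A or B) from its BERT embedding + preprocessed extra features
    def __init__(self, bert_dim=9216, extra_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(bert_dim + extra_dim, 1800), nn.ReLU(), nn.Dropout(0.9), nn.BatchNorm1d(1800),
            nn.Linear(1800, 500), nn.ReLU(), nn.Dropout(0.9),
            nn.Linear(500, 1),
        )

    def forward(self, feats_a, feats_b):
        score_a = self.mlp(feats_a)                      # (batch, 1)
        score_b = self.mlp(feats_b)                      # (batch, 1)
        zeros = torch.zeros_like(score_a)                # fixed logit for "Neither"
        return torch.softmax(torch.cat([score_a, score_b, zeros], dim=1), dim=1)
</code></pre>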
<p><strong>Training:</strong>
- I trained the model with Adam. Everything was highly regularized (~90% dropout, batch norm, weight decay ~6)
- The model was trained on all GAP data using 10-fold cv. The final prediction was just a blend of 10 predictions.</p> |
Gendered Pronoun Resolution | 1st place solution summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Gendered Pronoun Resolution
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="/rohitmidha23">@rohitmidha23</a> <a href="/hideodeo">@hideodeo</a> <a href="/sanikamal">@sanikamal</a> <a href="/kenkrige">@kenkrige</a> I've updated the post with links to the paper and the code. Hope you find them useful!</p>
<p><a href="/kenkrige">@kenkrige</a> Thanks for sharing your code.</p>
<p>You are correct, knowing the data helps understand the context. I was running out of page limit, so didn't expand much on it and left it to the references and appendix. I tried my best to put down the ideas through writeup and figures, but I'm sure there is enough room for improvement.</p>
<p><a href="/sajidhussainml">@sajidhussainml</a> sorry, I don't know how to answer your question.</p>
<p><a href="/aoimakino">@aoimakino</a> thanks.</p>
<p><a href="/prashanththangavel">@prashanththangavel</a> thanks.</p>
<p><a href="/cpmpml">@cpmpml</a> thanks.</p>
<p><a href="/ehpyeon">@ehpyeon</a> thanks.</p>
<p><a href="/pinkkyone">@pinkkyone</a> thanks.</p>
<p><a href="/amitkumarjaiswal">@amitkumarjaiswal</a> thanks.</p>
<p><a href="/pheell">@pheell</a> thanks,</p>
<p><a href="/ceshine">@ceshine</a> thanks.</p>
<p><a href="/kenkrige">@kenkrige</a> not very active on kaggle )</p>
<p><a href="/kenkrige">@kenkrige</a> I'm in the process of writing the paper, will share the link soon.</p>
<p>Meanwhile, here are some references-</p>
<p><a href="http://aclweb.org/anthology/D18-1456">A Nil-Aware Answer Extraction Framework for Question Answering</a>
<a href="https://arxiv.org/abs/1901.00603">Coarse-grain Fine-grain Coattention Network for Multi-evidence Question Answering</a>
<a href="https://arxiv.org/pdf/1602.07019.pdf">Sentence Similarity Learning by Lexical Decomposition and Composition</a></p>
<p>Great competition and thanks to all the participants for keeping the competition alive!</p>
<p>I was more active initially but got sidetracked by things at work and international travel during the later half.</p>
<p>Since I'm traveling, I'll be able to post only a brief summary now but hopefully update it with more details later.</p>
<h1>Summary</h1>
<p>The focus of my work was on 1 architectural contribution and a lot of generalization analysis.</p>
<p>Here are the key ingredients-</p>
<ol>
<li><p>Data</p>
<ul><li>The automated pipeline is illustrated below.
<img src="https://i.imgur.com/4nBm6AH.png" alt="Data pipeline"></li>
<li>Data augmentation for neither instances - the under-represented category.</li></ul></li>
<li><p>Architecture</p>
<ul><li>Bert finetuning - no hyper parameter tuning was done.</li>
<li><strong>Evidence gathering by Distant Supervision</strong> - various attention-pooling, self-attention, co-attention and evidence decomposition layers to incorporate noisy information from heuristic and pre-trained coref models. Lee et al is a very powerful coref model and grossly undermined by the GAP paper. Here's <a href="https://www.kaggle.com/sattree/3-a-better-baseline">https://www.kaggle.com/sattree/3-a-better-baseline</a> a kernel that compares various coref models. It established a better 'unsupervised' baseline than the official bert baseline of .62. Unfortunately, I couldn't make it work on kaggle kernels and by the time I gave up (and subsequently got sidetracked) a better bert baseline of .53 had been established.</li>
<li>Label smoothing - to handle noisy labels.</li></ul></li>
<li><p>Unbiased model averaging</p>
<ul><li>Averaging across folds</li>
<li>Averaging across 5 seeds</li>
<li>Averaging across LMs (cased and uncased) - cased models are more representative and help identify named entities but at the same time rare entities often have sparse distributional properties. On the other hand, uncased models are quite noisy.</li>
<li>Averaging Sanitized and Unsanitized label models - since the labels are crowd sourced, it is reasonable to assume that there will be some pattern to the errors humans make and will translate to unseen stage 2 data as well (a common challenge in language datasets). At the same time, some of those errors will be just due to the noise in the behavior and enthusiasm of the turkers.</li></ul>
<p>The intractability of hyper parameter tuning for deep learning models and risk of underfitting/overfitting to the validation set (early stopping) make averaging more useful than usual. However, this effect can also be achieved through SWA by investing some research time but at a substantially reduced computational budget.</p></li>
</ol>
<p>Those are amongst the major contributors that I can think of right now!</p>
<h2>UPDATE</h2>
<p>The above description is for submission model 2.
Model 1 has all the same elements other than the last bullet point in model averaging. Model 1 was trained only on the sanitized version of the labels, in case the organizers end up putting in some extra effort of cleaning the crowd sourced labels.</p>
<h3>Architecture details</h3>
<p>Based on the cluster predictions (clusters co-referent with A, B and P) from the coref models, a mask corresponding to each cluster is created from word-piece tokens and applied to BERT embeddings (from the last layer). This masked version of embeddings for each such cluster is then fed to the <em>Evidence Gathering</em> module which systematically aggregates the information and comes up with an <em>evidence vector</em>. The evidence vector is then fed to the decision making layer resulting in a probability distribution over A, B and P. </p>
<p>The <em>Evidence Gathering</em> module sits on top of the BERT pre-trained models, but the BERT layers themselves are not frozen and get weakly adapted through error propagation, while, retaining most of the language characteristics necessary for generalization. </p>
<h2>UPDATE - 06/04/19</h2>
<p>arxiv paper link - <a href="https://arxiv.org/pdf/1906.00839.pdf">Gendered Ambiguous Pronouns Shared Task: Boosting Model Confidence
by Evidence Pooling</a>
github code link - <a href="https://github.com/sattree/gap">https://github.com/sattree/gap</a></p>
<p><img src="https://i.imgur.com/DPEGIA6.png" alt="Architecture"></p>
<p><a href="/stanislavblinov">@stanislavblinov</a> thanks )</p>
<p><a href="/cczaixian">@cczaixian</a> thanks )</p>
<p><a href="/taniaj">@taniaj</a> thanks )</p>
<p><a href="/akashravichandran">@akashravichandran</a> thanks )</p>
<p><a href="/dhaqui">@dhaqui</a> thanks )</p>
<p><a href="/sudalairajkumar">@sudalairajkumar</a> thanks )</p>
<p><a href="/mateiionita">@mateiionita</a> glad you found it useful!</p>
<p>Given the current status of (nlp) deep learning models, I prefer to leverage the existing architectures and build on top of them. There is sufficient evidence that these models capture the necessary linguistic structure and, thereby obviating the need to go through the painstaking process of creating hand-crafted domain specific features. Since, proref problem is a subset of the general coref problem, I found it reasonable to leverage the structure unravelled by them and it ended up working well for the problem.</p>
<p><a href="/taniaj">@taniaj</a> glad you asked!</p>
<p>There are two aspects to this-</p>
<ol>
<li><p>I generated neither instances by using the cluster predictions (albeit noisy) from Lee et al and looking for non-overlapping clusters of a pronoun and a mention. However, this approach was only meant to better calibrate the 'neither' probabilities and led to an overall improvement of 1%. Since, there aren't any architectural elements, it still doesn't help the model in understanding neither scenarios. A less laborious and smarter way to achieve the same effect, would be through <em>instance weighting</em>, although, it will require some investigation and runs the risk of overfitting on the validation set (tuning the weights as hyper parameters).</p></li>
<li><p>The main contribution comes from <em>evidence gathering</em> module (described above) which allows the model to understand the linguistic structure leading to neither scenarios. The evidence vector encodes information about which mentions a given pronoun is co-referent with and vice-versa.</p></li>
</ol>
<p>Hope the explanation helps!</p> |
Gendered Pronoun Resolution | 22nd place solution and some afterthoughts | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Gendered Pronoun Resolution
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>1. Our final solution</h2>
<p>Our approach was similar to those already described.
In a nutshell, the pipeline includes:
- BERT embeddings
- hand-crafted features
- several MLPs</p>
<h3>1.1. BERT embeddings</h3>
<p>We concatenated embeddings for A, B, and Pronoun taken from Cased and Uncased large BERT models - 3 layers (-4, -5, -6 turned out to work best). For each BERT model this yields a 9216-dimensional output: 3 (layers) x 3 (entities) x 1024 (BERT embedding size). </p>
<h3>1.2 Hand-crafted features</h3>
<p>We ended up with 69 features of different nature (<strong>mistake:</strong> turns out we should've put more effort on BERT finetuning):
- Neuralcoref, Stanford NLP and e2e-coref model predictions
- Predictions of MLP trained with ELMo embeddings
- Syntactic roles of A, B, and Pronoun (subject, direct object, attribute etc)
- Positional and frequency-based (distancies between A, B, Pronoun and derivations, whether they all are in the same sentence or Pronoun is in the following one etc.)
- Dependency tree-based (from <a href="https://www.kaggle.com/negedng/extracting-features-from-spacy-dependency">this</a> Kernel by <a href="/negedng">@negedng</a> )
- Named entities predicted for A and B
- GAP heuristics (from <a href="https://www.kaggle.com/sattree/2-reproducing-gap-results">this</a> Kernel by <a href="/sattree">@sattree</a> )</p>
<h3>1.3. Models</h3>
<p>The final combined model was a dense classification head built upon output from 5 other models:
- Two MLPs (like in Matei's <a href="https://www.kaggle.com/mateiionita/taming-the-bert-a-baseline">kernel</a>) - separate for Cased and Uncased BERTs, both taking 9216-d input and outputing 112-d vectors
- Two Siamese models with distances between Pronoun and A-embeddings, Pronoun and B-embeddings as inputs and shared weights
- One more MLP taking 69-d feature vectors as an input </p>
<p>Final predictions were clipped with 0.01 threshold (turns out it's better without clipping).</p>
<h2>2. What didn't work for us</h2>
<h3>2.1. Augmentation</h3>
<p>If A is the right reference, then substitute B with all other named entities in the sentence.</p>
<p><em>Example: [A] John entered the room and saw [B] Mary. [P] She looked so perfect laughing with Jerry and Noah. Btw, Jerry and Noah are Clara's and Julia's best friends".</em></p>
<p>True label is B ('She' refers to Mary). I used to augment with 'She', 'Mary' and all other noun phrases in the sentence (according to Spacy POS tagging):</p>
<p>A | B | Pronoun
Jerry - Mary - She (B is true)
Noah - Mary - She (B is true)
Clara - Mary - She (B is true)
Julia - Mary - She (B is true)</p>
<p>Thus the dataset was increased 9x but new instances were much simpler to classify than original ones (~0.15 CV loss, 0.6 test loss). We noticed that the model trained on augmented data tended to make more confident predictions, hence many good answers but some big misses as well. Decided to use it as one more input for stacking.</p>
<h3>2.2. Stacking</h3>
<p>Though blending helped a lot in the beginning, when CV loss was ~0.33, for some reason we were not able to get any benefit from stacking. We applied it mostly to OOFs built with augmentation (training folds being augmented, the validation one not); maybe we did something wrong.</p>
<h3>2.3. BERT finetuning</h3>
<p>Training loss would quickly drop to zero, but test loss would be ~0.6. Now we realize that we should've made more attempts. </p>
<h2>3. Afterthoughts</h2>
<ul>
<li>Even though it's very disappointing to miss the golden zone, still these emotions will fade away and the competition is the best one (that I personally took part in), that's why <a href="https://www.kaggle.com/c/gendered-pronoun-resolution/discussion/87660#latest-507277">thanks</a> as early as 3 weeks before the end. Many thanks to Google AI and Kaggle for organizing this research competition! Thanks also to <a href="/juliaelliott">@juliaelliott</a> and <a href="/wcukierski">@wcukierski</a> for handling all our questions. Lastly, thanks to my teammates - without <a href="/mateiionita">@mateiionita</a> the whole competition would probably never be the same, but others <a href="/vlarine">@vlarine</a> <a href="/dennislo">@dennislo</a> and <a href="/atanasova">@atanasova</a> also worked hard</li>
<li>Blind LB played a trick on us. Have to admit we were a bit over-confident about our running relative position :) </li>
<li>Some takeaways for us are prototyping all components (including stacking) on the early stage, and putting more efforts on "killer features", also finetuning BERT with larger pronoun-related datasets (instead of overfitting to the small GAP dataset) looks like a good idea to reuse. </li>
</ul>
<p><a href="https://github.com/vlarine/kaggle/tree/master/gendered-pronoun-resolution">Our code</a> (fully reproducible). </p> |
Gendered Pronoun Resolution | 33rd place simple solution [LB 0.26663] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Gendered Pronoun Resolution
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks everyone for the competition! This is my first NLP Kaggle, so I'm quite happy with the results. Thanks to my teammates, ods.ai and the Kaggle community for great ideas.
Now I would like to tell you about our solution.</p>
<h1>model</h1>
<p>As a baseline we took this <a href="https://www.kaggle.com/gdoteof/pytorch-bert-baseline-wd-epochs-cnn-lstm">kernel</a>, so text tokenization and model structure was similar, but we changed <strong>Head</strong> structure of our model. First thought was to make it more "smooth". Linear layers were 1024 * 3 -> 1024 -> ... -> 1024 -> 3 for BERT-large-cased. And we made it 1024 * 3 -> 1024 -> 512 -> 64 -> 3. And for BERT-base-cased 768 * 3 -> 768 -> 768 // 2 -> 768 // 8 -> 3. We also put Dropout(0.5) before each linear layer (and bn, relu, of course).
In both cases we didn't use the last BERT output layer. For base it was -2 and for large it was -4.</p>
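<p>A hedged PyTorch sketch of the BERT-large head described above (the exact ordering of bn/relu/Dropout(0.5) around each linear layer is an assumption):</p>
<pre><code>import torch.nn as nn

def head_block(in_dim, out_dim):
    return nn.Sequential(nn.BatchNorm1d(in_dim), nn.ReLU(), nn.Dropout(0.5),
                         nn.Linear(in_dim, out_dim))

large_head = nn.Sequential(
    head_block(1024 * 3, 1024),    # concatenated A / B / Pronoun vectors from BERT-large
    head_block(1024, 512),
    head_block(512, 64),
    head_block(64, 3),             # logits for A / B / Neither
)
</code></pre>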
<h1>training process</h1>
<p>One important step: we used cleared data, from <a href="https://www.kaggle.com/c/gendered-pronoun-resolution/discussion/81331#503094">here</a>, because some samples were labeled wrongly and it could corrupted results.
We used 6 folds. On each model was trained with BertAdam optimizer(lr = 0.0001, weight_decay = 0.01) from <a href="https://github.com/huggingface/pytorch-pretrained-BERT">pytorch-BERT repo</a>. And ReduceLROnPlateau scheduler with patience = 5 and alpha = 2. When we had no score improvement in 15 epochs we stopped training. We also tried CosineAnnealingLR after with RMSprop, but it didn't improved the score.
CV is great for stacking, so we decided to build a second layer model, which was LGBM. We added both BERT-large and BERT-base + some features.</p>
<h1>features</h1>
<p>For features we used distance and url (as in this <a href="https://www.kaggle.com/chanhu/bert-score-layer-lb-0-475">kernel</a>) and some statistic features, like 'num words', 'num unique words', 'num chars', 'num stopwords', 'num words upper' and encoded pronoun (just by number, ex. 'She': 1, 'he': 2, etc.). And that's it!</p>
<h1>LGBM</h1>
<p>For parameter search I used the powerful <a href="https://github.com/hyperopt/hyperopt">hyperopt</a> framework. It is very useful and effective for randomized search.</p>
<h1>fine-tuning BERT</h1>
<p>We also tried BERT fine-tuning, following this <a href="https://github.com/huggingface/pytorch-pretrained-BERT/tree/master/examples/lm_finetuning">example</a>, but it didn't seem to work; the score was worse. So if you succeeded in BERT fine-tuning, please tell us in the comments how you did it.</p>
<p>Thanks for reading! 🥈 </p>
<p>Lol, simple models without features give 0.20819. Thanks for sharing a very elegant solution!</p> |
Gendered Pronoun Resolution | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Gendered Pronoun Resolution
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Glad that can contribute smth to your learning, as I learned a lot from your kernel and ideas 👍 </p>
<p>Sorry for the confusion: I first fine-tuned the head with frozen BERT on the external dataset, then fine-tuned the embeddings with frozen head on the same external dataset, and afterwards trained only head on the GAP.
Btw, without training on GAP at all, after all external training I could get the 0.36 on the 'clean' test stage1 data - don't remember the score on the real 'dirty' data, but should be around 0.44 or smth like this. </p>
<p>the best explanation why clipping is helpful was <a href="https://www.kaggle.com/c/statoil-iceberg-classifier-challenge/discussion/48241#latest-306293">here</a>. I think that extremely good models (like 1st place) can get without it, but for majority of the models it helpful. I assumed that the test data stage2 ground truth will be noisy and with human errors, so the models will do the mistakes inevitably. Clipping reduce the effect of those mistakes.</p>
<p>Things that didn't work:
- Preco dataset was too noisy and I couldn't make sense out of it
- convolutional layer with kernel larger than 1 - still struggle to understand how and why. The intuition was that more context for the span would allow for richer representation, but somehow it didn't work
- finetuning BERT directly on the GAP data
- parallelism features from the original GAP paper
- Elmo-style scalar mixing
- swapping A and B
- using the scraped information from wikipedia</p>
<p>Many thanks to google and to kaggle for the organization of the competition, and to the awesome community here - <a href="/kashnitsky">@kashnitsky</a>, <a href="/mateiionita">@mateiionita</a>, <a href="/ceshine">@ceshine</a>, <a href="/kenkrige">@kenkrige</a> and all others.</p>
<p>The kernel by Ceshine Lee was a very good starting point - I played with it, using bert-large-cased, added 1d-conv layer with kernel 1, spanextractor, and the best result was around 0.48 on the test_stage1 data. As others, later I discovered that intermediate layers of BERT encoder contain better representation for the model, for my architecture in particular [-5] and [-6]. This was bringing ~0.41 depending on the seed. Afterwards I started adding the features, like syntactic distance between mentions, token distance and so on. It helped a bit, but not much. I was stuck, and at this point finally the guys from ontonotes replied to me, so I started using the external data.</p>
<h3>External data</h3>
<p>Initially I thought that data would decide everything in this competition, so while the models were running I put some effort into converting winobias, winogender, dpr and later ontonotes into the gap format. The 1st place solution shows that more data does not always mean better performance =) Anyway, I had the data and wasn't quite sure what to do with it.</p>
<p>At first I simply put all external datasets into one big dataset, split it into train and validation and tried to fine-tune both BERT and the head on it. The idea was to use the learned weights as initialization weights for the 'normal' GAP training, where normal means using ~2400 GAP datapoints with 5-fold CV. It didn't work; the score became worse. Then I trained only the head for 2 epochs without tuning BERT, and it helped to go from ~0.4 to ~0.36. So I had the observation that better initialization of the head weights led to better minima. I decided to work a bit on this idea. I started to use winobias as validation data, thus minimizing the gender bias. This brought me somewhere around 0.35. Then (it was 2 days before the end of the competition) I tried training the head, then freezing the head and fine-tuning the BERT weights (only one layer, either [-5] or [-6]) with a very small learning rate. It had some positive effect and put me around 0.33-0.34. </p>
<p>Afterwards I had no time for interesting stuff anymore, so I started with ensembling: I ran 4 models overall, bert-large-cased with the [-5] layer, bert-large-uncased[-5], bert-large-cased[-6] and bert-large-uncased[-6]. The final prediction is just the average of all models with clipping at 1e-2. </p>
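<p>A minimal sketch of that final averaging and clipping step; the file names are hypothetical, and whether the probabilities were re-normalized afterwards is my assumption:</p>
<p><code>
# Average the four model predictions and clip at 1e-2.
import numpy as np
import pandas as pd

files = ["cased_l5.csv", "uncased_l5.csv", "cased_l6.csv", "uncased_l6.csv"]   # hypothetical names
preds = [pd.read_csv(f)[["A", "B", "NEITHER"]].values for f in files]

avg = np.mean(preds, axis=0)
avg = np.clip(avg, 1e-2, 1 - 1e-2)
avg = avg / avg.sum(axis=1, keepdims=True)      # optional re-normalization (an assumption)

sub = pd.read_csv(files[0])[["ID"]]
sub["A"], sub["B"], sub["NEITHER"] = avg[:, 0], avg[:, 1], avg[:, 2]
sub.to_csv("submission.csv", index=False)
</code></p>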
<h3>Final Architecture</h3>
<p>Here is the final overview of the architecture I used:
BERT -> Conv1d with kernel=1 (1024x64) -> SelfAttentiveSpanExtractor(64) * 3 -> BatchNorm1d() -> fully connected (64*3 x 64) -> dropout (0.6) -> fully connected (64 x 81) (here all the manual features come in) -> fully connected (81 x 3). </p> |
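<p>Below is a minimal PyTorch sketch of this head, assuming AllenNLP's <code>SelfAttentiveSpanExtractor</code>. The activations, the number of manual features and the exact point where they are concatenated are my assumptions, since the description above leaves them ambiguous:</p>
<p><code>
# Minimal sketch of the head described above (shapes follow the text, details are assumptions).
import torch
import torch.nn as nn
from allennlp.modules.span_extractors import SelfAttentiveSpanExtractor

class GAPHead(nn.Module):
    def __init__(self, bert_dim=1024, proj_dim=64, n_manual_feats=17, hidden=81):
        super().__init__()
        self.proj = nn.Conv1d(bert_dim, proj_dim, kernel_size=1)     # 1x1 conv over tokens
        self.span_extractor = SelfAttentiveSpanExtractor(proj_dim)   # applied to the P, A, B spans
        self.bn = nn.BatchNorm1d(proj_dim * 3)
        self.fc1 = nn.Linear(proj_dim * 3, proj_dim)
        self.drop = nn.Dropout(0.6)
        self.fc2 = nn.Linear(proj_dim + n_manual_feats, hidden)      # manual features enter here
        self.out = nn.Linear(hidden, 3)

    def forward(self, bert_out, spans, manual_feats):
        # bert_out: (batch, seq_len, 1024); spans: (batch, 3, 2) start/end indices of P, A, B
        x = self.proj(bert_out.transpose(1, 2)).transpose(1, 2)      # (batch, seq_len, 64)
        span_vecs = self.span_extractor(x, spans)                    # (batch, 3, 64)
        h = self.bn(span_vecs.reshape(span_vecs.size(0), -1))
        h = self.drop(torch.relu(self.fc1(h)))
        h = torch.relu(self.fc2(torch.cat([h, manual_feats], dim=-1)))
        return self.out(h)
</code></p>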
Gendered Pronoun Resolution | 4th place solution overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Gendered Pronoun Resolution
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I also would like to thank you for the selfless contributions. They are really impressive.</p>
<p>Thanks. I think so, because the common practice based on concatenation was too shallow to propagate the information across inputs efficiently.</p>
<p>Thanks. I will update my code this Saturday.</p>
<p>Congratulation to all the winners and thanks to Google AI for hosting the competition. I have learned a lot through the awesome Kaggle community.</p>
<p>Here is a brief overview of my solution. Since I joined this competition at a late stage, my solution does not involve BERT fine-tuning or data augmentation. What I have done was based on three observations.</p>
<h3>1. Input Dropout</h3>
<p>I've played with BERT in other tasks where I found there are some redundancies in BERT vector. Even though we only use a small portion (like 50%) of the BERT vector, we still can get desirable performance.</p>
<p>Based on this observation, I placed a dropout with a large rate right after the input layer, which can be seen as a kind of implicit ensembling, like training several prototypes on subsets randomly sampled from the BERT vector.</p>
<h3>2. Word Encoder</h3>
<p>As I mentioned in section 1, it might not be suitable to use the output directly because of redundancies. Therefore I use a word encoder to down-project the BERT vector into a lower-dimensional space where I can extract task-related features efficiently. </p>
<p>The word encoder is a simple affine transformation with SELU activation, and it is shared for A, B, and P. I tried designing separate word encoders for names and pronouns, and making the word encoder deeper with highway transformations, but both resulted in overfitting.</p>
<p>This idea is also inspired by the multi-head transformation. I implemented a multi-head NLI encoder, but it only improved the performance by ~0.0005 and took much more computation time. So maybe a single head is good enough for this task.</p>
<h3>3. Answer selection using NLI architectures</h3>
<p>I consider this task a sub-task of answer selection. Given queries A, B, and an answer P, we can model the relations between queries and answers with heuristic interaction:</p>
<p><code>
I(Q, A) = [[Q; A], Q - A, Q * A]
</code>
and then extract features from the interaction vector <code>I(Q, A)</code> with a siamese encoder. The overall architecture would be like this:</p>
<p><img src="https://i.imgur.com/WGJ9OPK.png" alt="Model"></p>
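<p>As a rough PyTorch sketch of this idea (the hidden sizes, the scorer structure and the way the "Neither" score is produced below are my assumptions, not the author's exact design):</p>
<p><code>
# Minimal sketch: input dropout, shared SELU word encoder, heuristic interaction, siamese scorer.
import torch
import torch.nn as nn

class NLIHead(nn.Module):
    def __init__(self, bert_dim=1024, enc_dim=64, input_dropout=0.6):
        super().__init__()
        self.input_drop = nn.Dropout(input_dropout)    # large dropout right on the BERT vectors
        self.word_encoder = nn.Sequential(nn.Linear(bert_dim, enc_dim), nn.SELU())   # shared for A, B, P
        self.scorer = nn.Sequential(nn.Linear(enc_dim * 4, enc_dim), nn.SELU(), nn.Linear(enc_dim, 1))

    def interact(self, q, a):
        # I(Q, A) = [[Q; A], Q - A, Q * A]
        return torch.cat([q, a, q - a, q * a], dim=-1)

    def forward(self, a_vec, b_vec, p_vec):
        a = self.word_encoder(self.input_drop(a_vec))
        b = self.word_encoder(self.input_drop(b_vec))
        p = self.word_encoder(self.input_drop(p_vec))
        score_a = self.scorer(self.interact(a, p))     # same (siamese) scorer for both queries
        score_b = self.scorer(self.interact(b, p))
        score_n = self.scorer(self.interact(a, b))     # "Neither" score (an assumption)
        return torch.cat([score_a, score_b, score_n], dim=-1)
</code></p>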
<p>Finally, here is a simple performance report of my models:</p>
<p>| Model | 5 fold CV on Stage 1|
| -------- | -------- |
| Base BERT | 0.50 |
| Base BERT + input dropout | 0.45 |
| Base BERT + input dropout + NLI | 0.43 |
| Base BERT + all | 0.39 |
| Large BERT + input dropout | 0.39 |
| Large BERT + all | 0.32 |
| Ensemble of Base BERT and Large BERT | 0.30 |</p>
<p>I will make my whole repository public after cleaning up some messy things. Since I am occupied by certain exams currently, it might be updated a week later.</p>
<ul>
<li>Updated: <a href="https://github.com/zake7749/Fill-the-GAP">Github repository</a>, the code is still under cleaning.</li>
</ul> |
Gendered Pronoun Resolution | 5th Placed Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Gendered Pronoun Resolution
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Only a pleasure. Enjoy playing with hyper parameters, and don't forget to turn off server and tpu on google cloud when you are done.</p>
<p>Well done! That's an excellent score. If you run training again with exactly the same parameters, you might get a worse (or better) logloss score. For my final leaderboard score of 0.19, I used 3 training runs with a sequence length of 64 and 3 with length 128. Then I averaged all 6 prediction files. The averaging improves the logloss score dramatically, so 0.26 should be fine as one of the runs.</p>
<p>You are exactly right in your explanation of my truncating algorithm. My thinking was that I wanted to truncate as far away from P, A, B tokens as possible. So you could actually think of it as 6 segments:</p>
<p><code>
x --- P --- x --- A --- x --- B --- x    # x represents a delete point
</code></p>
<p>The end segments were truncated on the outsides and the inner segments were truncated in the middle, away from P, A, B. I'm not sure if that sketch explains why the 2 middle segments were longer.</p>
<p>Changing the truncation algorithm seemed to have a big impact on results. So this would be an area for further research. Instinct tells me that truncating at the end of sentences where possible would be better than just counting tokens like I did.</p>
<p>By the way, <a href="/taniaj">@taniaj</a> thanks again for sharing your data corrections. Those also helped my results.</p>
<p>I'm also new to tensorflow and to machine learning as a whole, which is why I started with their <code>run_classifier.py</code> and modified that. I was very lucky to do so well in this competition.</p>
<p>I'm glad you got it running. I found large batch sizes helped results a lot, so did using BERT large instead of BERT base. For both of those you will need a TPU I think. I didn't use colab, I used google cloud platform. When you sign up, they give you one year and $300 free. A preemptible TPU is about $1.40 per hour and each training run only takes about 6 min for that data. There are a few configuration tricks, because you have to use google storage <code>gs://your_bucket/...</code> for all your files. Please ask if you get stuck.</p>
<p>Another thing I learned was that some bad logloss results don't really matter as long as the accuracy is good. When you average the output csv files from a few different training runs and clip the results that are too close to 0 or 1, the logloss improves drastically. Two training runs with the same hyper-parameters can give significantly different log-loss results, but prediction accuracy is quite consistent. I got about 85% accuracy from BERT base and 89% for BERT large. I just did the averaging in pandas afterwards.</p>
<p><a href="/taniaj">@taniaj</a> sorry about the delay. I was planning to polish <a href="https://github.com/kenkrige/BERT-Fine-tune-for-GAP">my code</a> but time has not been on my side. There is not much of a readme so feel free to ask questions if you have problems running it.</p>
<h2>Summary</h2>
<p>My model was based on fine-tuning BERT-large with rudimentary output layer and no additional statistical features. The code in <code>run_classifier.py()</code> provided with BERT was modified to achieve this. I will post the <a href="https://github.com/kenkrige/BERT-Fine-tune-for-GAP">full code here</a> once I've neatened it up, but for now an overview of the main elements.</p>
<p>For the input layer, I sorted the 3 char offsets and stored the corresponding permutation of P, A, B, then split the text into 4 segments at these offsets. Each segment was WordPiece tokenized and the token segments were truncated to match the required sequence length. The choice of truncating algorithm made a big difference to the results. The token offsets were calculated as the cumulative sum of tokens in each segment. The token offsets were sorted back into P, A, B order and used to make binary mask “features” for the positions of P, A, B.</p>
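<p>A minimal sketch of this bookkeeping is below; the truncation step and the [CLS]/[SEP] offset shifts are omitted here (the real logic is in the kernel linked further down):</p>
<p><code>
# Minimal sketch of locating P, A, B after WordPiece tokenization.
import numpy as np

def make_masks(text, char_offsets, tokenizer, max_seq_len=64):
    # char_offsets: character offsets of P, A, B in the raw text
    order = np.argsort(char_offsets)                   # remember which of P, A, B comes first
    cuts = sorted(char_offsets)
    segments = [text[:cuts[0]], text[cuts[0]:cuts[1]], text[cuts[1]:cuts[2]], text[cuts[2]:]]
    token_segments = [tokenizer.tokenize(s) for s in segments]
    # (the real code truncates the token segments here so the total fits max_seq_len)
    sorted_token_offsets = np.cumsum([len(s) for s in token_segments[:3]])
    token_offsets = np.empty(3, dtype=int)
    token_offsets[order] = sorted_token_offsets        # back into P, A, B order
    tokens = [t for seg in token_segments for t in seg]
    masks = np.zeros((3, max_seq_len), dtype=int)
    for i, pos in enumerate(token_offsets):
        masks[i, pos] = 1                              # marks the first word-piece of P, A, B
    return tokens, masks
</code></p>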
<p>The output from BERT was masked to retrieve the 1024 dimensional vectors for P, A, B. Three new “probability embeddings” were computed as (element-wise multiplication):</p>
<ol>
<li>PA </li>
<li>PB</li>
<li>AB – PP </li>
</ol>
<p>The idea of the first two was to represent the similarity between P&A or P&B as abstract vector embeddings. For “Neither”, the abstract embedding in 3 is supposed to represent the extent to which A&B are similar to each other but differ from P. These embeddings (1024-dim vectors) were then reduced to a scalar probability by a trainable (1024, 1) tensor. The same tensor was used for the probs of A and B to prevent the model learning from whether “True” or “False” appeared first, as my own pseudo-data (not used in the end) was somewhat unbalanced in this regard. A separate trainable tensor was applied to (AB – PP).</p>
<p>This architecture on a 64 token sequence gave a stage 1 score (corrected gap data) of 0.30 to 0.38 (single model), which ensembled to 0.275 by simple averaging of the predictions from 6 models. I assume the reason for the range of results has to do with the architecture being a bit unstable with respect to random initialisation.</p>
<h2>Input</h2>
<p>Within <code>run_classifier.py()</code>, changes were made to the input methods to suit the GAP data. A detailed explanation is given in <a href="https://www.kaggle.com/kenkrige/bert-example-prep">this</a> short kernel, which illustrates how a single example is prepared for the input layer. Pay particular attention to the way text was truncated to fit the chosen maximum token sequence. Changes to this algorithm made a big difference to results. I think that doing this truncating at a sentence level might bring further gains.</p>
<h2>Hidden Layers</h2>
<p>Vanilla BERT-large.</p>
<h2>Output</h2>
<p>Unfortunately, the 1024 dimensional vectors make it impractical to do a similar demonstration of the output. Instead I will give a short explanation of some of the code.</p>
<p>Firstly, the output was extracted and masked to get the P, A, B embeddings.</p>
<p><code>
all_out = model.get_sequence_output()
P = tf.boolean_mask(all_out, P_mask)
A = tf.boolean_mask(all_out, A_mask)
B = tf.boolean_mask(all_out, B_mask)
</code></p>
<p>This worked fine on a GPU, but <code>tf.boolean_mask()</code> is not implemented for TPU, which I stepped up to for BERT-large. It took a good few hours to figure out a messy workaround, but it did the job.</p>
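<p>One common TPU-friendly replacement, when each mask selects exactly one token per example, is a masked sum instead of <code>tf.boolean_mask()</code>; this is a generic workaround, not necessarily the exact fix used here:</p>
<p><code>
# all_out: (batch, seq_len, hidden); P_mask/A_mask/B_mask: (batch, seq_len) with a single 1 per row
P = tf.reduce_sum(all_out * tf.expand_dims(tf.cast(P_mask, tf.float32), -1), axis=1)
A = tf.reduce_sum(all_out * tf.expand_dims(tf.cast(A_mask, tf.float32), -1), axis=1)
B = tf.reduce_sum(all_out * tf.expand_dims(tf.cast(B_mask, tf.float32), -1), axis=1)
</code></p>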
<p>The probability embeddings were then calculated as explained above.</p>
<p><code>
PA = tf.multiply(P, A)
PB = tf.multiply(P, B)
PP = tf.multiply(P, P)
AB = tf.multiply(A, B)
N = tf.subtract(PP, AB)
</code></p>
<p>And finally reduced to probabilities by trainable tensors and a softmax.</p>
<p><code>
AB_weights = tf.get_variable(
    "AB_weights", [1, hidden_size],
    initializer=tf.truncated_normal_initializer(stddev=0.02))

N_weights = tf.get_variable(
    "N_weights", [1, hidden_size],
    initializer=tf.truncated_normal_initializer(stddev=0.02))

A_out = tf.matmul(PA, AB_weights, transpose_b=True)
B_out = tf.matmul(PB, AB_weights, transpose_b=True)
N_out = tf.matmul(N, N_weights, transpose_b=True)

output_bias = tf.get_variable(
    "output_bias", [num_labels], initializer=tf.zeros_initializer())

logits = tf.concat([A_out, B_out, N_out], axis=1)
logits = tf.nn.bias_add(logits, output_bias)
probabilities = tf.nn.softmax(logits, axis=-1)
log_probs = tf.nn.log_softmax(logits, axis=-1)
</code></p>
<p>That was about it. The rest of the code was pretty routine or already included in <code>run_classifier.py</code>.</p>
<p>That certainly made a big difference. Curiously, shorter sequences, all the way down to about 64 tokens, gave me the best single model results. Instinctively I had assumed that retaining as much information as possible would give the best results. The only explanation I can think of is that tokens close together influence each other's embeddings more, giving stronger relationship between P, A, B in the output layer. I need help in understanding this and would love to hear alternative explanations.</p>
<p>Actually, I should have said element-wise multiplication, as in tf.multiply(). Apologies for the error. I will correct it in the post.</p>
<p>The idea of the element-wise multiplication was to achieve something similar to a dot product, as described in the original BERT Paper. If the trainable output tensor were all 1s it would be exactly dot product.</p>
<p><strong>P dot A</strong> would give some measure of how similar the embeddings of the pronoun and name A are.</p>
<p><a href="/isikkuntay">@isikkuntay</a> I would be very happy to take a look, but I'm a newcomer to this field so might not be able to make much of a contribution.</p>
<p>Thanks <a href="/sattree">@sattree</a>. I suppose your coref pooling based on other models would have shortened the dependencies thereby reducing some of that noise. Well done again on your insightful solution.</p> |
Gendered Pronoun Resolution | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Gendered Pronoun Resolution
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>paper is available at <a href="https://arxiv.org/abs/1905.01780">https://arxiv.org/abs/1905.01780</a></p>
<p><strong>Update on 4/30/2019</strong></p>
<p>After reading other solutions, I realized layer -5 and -6 are the best. For some reason I thought -1 to -4 were the only available layers to extract features from. I just trained my models again, with everything else unchanged except changing Pure Bert model from layer -3 and -4 to -5 and -6; End2end model from layer -4 to -5.</p>
<p>Stage 1 5fold CV score improved from <strong>0.2922</strong> to <strong>0.2846</strong>
Stage 2 Late submission score improved from <strong>0.19473</strong> to <strong>0.18075</strong> (only one submission, no LB probing)</p>
<p>I think this shows the augmentation approach can be as good as fine-tuning.</p>
<p>yeah I submitted to the workshop as well</p>
<p>regular papers which are anonymized for double blind review are prohibited for preprint release, but since this shared-task description paper is not anonymized (we pretty much described what we did in the Discussion here anyway), I think it's okay. See: <a href="http://www.acl2019.org/EN/call-for-papers.xhtml">http://www.acl2019.org/EN/call-for-papers.xhtml</a></p>
<p>Thanks Yury for all the discussion posts during the competition. </p>
<p>Thank you Matei. Your kernel was amazing! It gave us such a strong baseline in this competition. </p>
<p>About fine-tuning, I tried, but couldn't figure out a way to make it work in this (i.e. your original) set up. Without public LB in stage1, it was hard to gauge how I place, so I went all-in on augmentation and ensemble without fine-tuning.</p>
<p>Thanks Chanhu. And thanks a lot for your kernel!</p>
<h2>Scores</h2>
<p>Stage 1 LB: <strong>0.2929</strong> (trained on 2000 Test + 454 Validation, evaluated on 2000 Development)</p>
<p>Stage 1 5 Fold Out-of-fold: <strong>0.2922</strong> (train/evaluated on 2000 Test + 2000 Development + 400 Validation)</p>
<p>Stage 2: <strong>0.19473</strong> (trained on the same 4400 examples as above, I left the other 54 validation data for sanity check)</p>
<h2>Model</h2>
<p>I have a main model (referred to as “end-2-end” model below) and a secondary model (referred to as “pure bert” model below).</p>
<p>Both models use Bert features based on Matei’s <a href="https://www.kaggle.com/mateiionita/taming-the-bert-a-baseline">strong baseline</a> (thanks Matei!). The pure bert model has the same architecture as in Matei’s kernel; the end-2-end model uses the top level architecture in <a href="https://arxiv.org/abs/1707.07045">https://arxiv.org/abs/1707.07045</a>, as implemented by Chanhu’s <a href="https://www.kaggle.com/chanhu/bert-score-layer-lb-0-475">kernel</a>. I also added 10 linguistic features from pheel’s <a href="https://www.kaggle.com/pheell/look-ma-no-embeddings">kernel</a> to the end-2-end model.</p>
<p>For pure bert model, I concatenated layer -3 and -4; for end-2-end model, I only used layer -4 embeddings. I used Bert Large (both cased and uncased) with max token length 256 for both models.</p>
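<p>For reference, a minimal sketch of extracting and concatenating intermediate layers with the pytorch-pretrained-BERT API; <code>bert_model</code>, <code>input_ids</code> and <code>attention_mask</code> are assumed to be defined:</p>
<p><code>
# Concatenate selected encoder layers instead of using only the top layer.
import torch

encoded_layers, _ = bert_model(input_ids, attention_mask=attention_mask,
                               output_all_encoded_layers=True)
features = torch.cat([encoded_layers[-3], encoded_layers[-4]], dim=-1)   # (batch, seq_len, 2 * 1024)
</code></p>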
<p>I didn’t do any fine-tuning. </p>
<h2>Augmentation</h2>
<p>Bert embeddings of the names (A and B) contain information about both the context and the name itself. For coreference resolution task, it doesn’t matter whether a name is Alice or Betty or Carol or Debby. Therefore, I augmented the input data by replacing the A and B names by 4 sets of placeholder names:
<code>
[ { 'female':['Alice','Kate'], 'male': ['John','Michael']},
{ 'female':['Elizabeth','Mary'], 'male': ['James','Henry']},
{ 'female':['Kate','Elizabeth'], 'male': ['Michael','James']},
{ 'female':['Mary','Alice'], 'male': ['Henry','John']}]
</code></p>
<p>I chose these names by looking at the most common names in training data while making sure not to choose similar sounding names. If you pay attention to popular name trends in the U.S., you will notice all these placeholder names are old school names that are not as popular anymore (see e.g. <a href="https://www.behindthename.com/name/mary/top/united-states">https://www.behindthename.com/name/mary/top/united-states</a> and
<a href="https://www.behindthename.com/name/john/top/united-states">https://www.behindthename.com/name/john/top/united-states</a>). This is because in the training data, there are a large amount of articles about historical people.</p>
<p>I initially chose these 4 sets of names. Later I tried fewer, more, and different (more modern) names but didn’t find improvements.</p>
<p>For each input data row, I augmented it into 4 variations by replacing A by Alice, B by Kate if female, A by John B by Michael if male, etc. The bert embeddings are extracted 4 more times for these 4 augmented data. Then each training epoch will see each input data 5 times (original and 4 augmentations). This way, the embedding information about the particular names will be averaged out. Only the information about the sentence structure will be left. Or to put it in a different way, let’s say if 200 dimensions out of the 1024 Bert embeddings are about the name itself and the rest 824 dimensions are about the context, then the model will learn to rely on only those 824 dimension to resolve coreference and ignore the other 200 dimensions.</p>
<p>Another potential benefit of replacing all names with these one-word short names is that, if a name is long (first name + last name), it will be tokenized into many word pieces, making it harder for Bert to embed the necessary information into these tokens. This is only my guess, to be further tested. But I do have evidence supporting this: at test time, I did TTA (test time augmentation) the same way as in training. The order of the 4 augmentations' scores varies depending on the model, but they all outperformed the un-augmented version, always. See the “original” row in the results table below.</p>
<p>When I did the name replacement, I replaced all the occurrences of the name in the document by placeholder names. In order to avoid confusion, I didn’t do the replacement in the following situations:
1. If the placeholder name (i.e. Alice or Kate) already appear in original document
2. If A or B is full name (first + last name), but the first name or last name appear alone elsewhere in the document. For example, if A is “Michael Jordan” but “Jordan” is used to referred to the person later in the doc, if I replace all the “Michael Jordan” in the doc by “Henry” then Bert would think “Jordan” and “Henry” are different people
3. If the name has more than two words, such as “Elizabeth Frances Zane” or “Jose de Venecia Jr”, I don’t replace it because it would be difficult to implement the rule above</p>
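<p>A minimal sketch of the replacement itself; it only implements rule 1 above, omits the multi-word name handling, and ignores the fact that the real pipeline also has to adjust the character offsets of A, B and the pronoun:</p>
<p><code>
# Replace A and B with one placeholder set, e.g. {'female': ['Alice', 'Kate'], 'male': ['John', 'Michael']}.
import re

def replace_names(text, name_a, name_b, gender, placeholder):
    new_a, new_b = placeholder[gender]
    if new_a in text or new_b in text:
        return None                                    # rule 1: placeholder already present, skip
    text = re.sub(r"\b%s\b" % re.escape(name_a), new_a, text)
    text = re.sub(r"\b%s\b" % re.escape(name_b), new_b, text)
    return text
</code></p>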
<h2>Incorrect ground truth</h2>
<p>As pointed in the Discussion, there are clearly wrong labels. I fixed 74 of them in development, and 85 in test and validation. (I did this before wayward’s <a href="https://www.kaggle.com/c/gendered-pronoun-resolution/discussion/81331#503094">post</a>. She reported similar number of corrections, so I assumed it would be similar without comparing hers to mine.)</p>
<p>One important thing is that the model tends to make very confident predictions when trained with “clean” labels, i.e. the output probabilities are very close to 0 or 1 for some rows. When scored with “dirty” labels, however, a prediction that puts a very small probability on the labeled class leads to a very large logloss. Therefore, I clipped all the output probabilities at a tuned threshold, usually 0.005 or 0.006.</p>
<h2>Other changes</h2>
<p>I made the following changes to the Bert feature extraction part of Matei’s kernel. Without them, the extracted token embeddings are misaligned in some cases.
1. There are two documents (209 in dev and 921 in test) that are too long to be correctly embedded using max sequence length 256. I wrote a function to throw away the first few sentences in such cases to make all embeddings valid. But in the stage 2 data, there was no such case.
2. In Matei’s kernel, it always skips the first 2 tokens. I changed it to skipping two tokens only if the second token is <code>"</code>, because sometimes (I think when there are quotations in the document) the actual tokens start at index 1 instead of 2
3. I added
<code>if text=='#': return 1</code>
in functions <code>count_chars_no_special</code> and <code>count_length_no_special</code></p>
<h2>Validation strategy and ensemble</h2>
<p>Final model is <code>0.9 * end2end + 0.1 * pure bert</code></p>
<p>Below is the breakdown of the model's OOF scores, ensemble components and weights. I tuned all the weights using clean labels, but report the scores with dirty (i.e. actual) labels. Each column is a separate model. The top 5 rows in any column are the evaluation results on the 5 TTA variations from the same model.</p>
<p><img src="https://imgur.com/oXnZrd6.png" alt="results"></p>
<p>The model is trained on the 4400 examples in 5 folds. In each fold there are 3520 training examples and 880 OOF validation examples. The 3520 training examples are further divided into 5 folds, each with a 2816-example training set and a 704-example early-stopping validation set. As mentioned, the 2816 examples are augmented into 14080 examples per epoch. </p>
<h2>Code</h2>
<p>Code is available at: <a href="https://github.com/boliu61/gendered-pronoun-resolution">https://github.com/boliu61/gendered-pronoun-resolution</a></p>
<h2>Update on 4/30/2019</h2>
<p>After reading other solutions, I realized layer -5 and -6 are the best. For some reason I thought -1 to -4 were the only available layers to extract features from. I just trained my models again, with everything else unchanged except changing Pure Bert model from layer -3 and -4 to -5 and -6; End2end model from layer -4 to -5.</p>
<p>Stage 1 5fold CV score improved from <strong>0.2922</strong> to <strong>0.2846</strong>
Stage 2 Late submission score improved from <strong>0.19473</strong> to <strong>0.18075</strong> (only one submission, no LB probing)</p>
<p>I think this shows the augmentation approach can be as good as fine-tuning.</p>
<h2>Update on 5/6/2019</h2>
<p>paper is available at <a href="https://arxiv.org/abs/1905.01780">https://arxiv.org/abs/1905.01780</a></p> |
Gendered Pronoun Resolution | 8th place solution [LB 0.20138] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Gendered Pronoun Resolution
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My approach was based purely on fine-tuning large Bert models and heavy ensembling to leverage the instability of fine-tuned Bert models. I started by fine-tuning large Bert on several tasks, most of them about pronoun resolution (swag, definite pronouns, 2 flavours of PreCo and WinoBias). Each of those models is then fitted on the GAP dataset a number of times (10+). Afterwards, I ensemble, through simple averaging, the top 50% models by validation loss per task. At this point, I have 5 different set of test predictions (ensemble_swag.csv, ensemble_wino_bias.csv, ensemble_preco.csv ...). Ensembling those, again through simple averaging we arrive at our final prediction set.</p>
<p>Several pre-training tasks (QA, classification, etc...) were tried and most hurt results. Bert large vanilla directly tuned on GAP also hurt overall results when added to the ensembling. It seems that, with the exception of Swag, only directly related fine-tuning tasks yielded positive results. </p>
<p>Generally, the best hyperparameters for any fine-tuning have been: learning_rate = 1e-5, batch_size = 32, warmup_proportion = 0.1, epochs = 2. </p>
<p>I believe the reason ensembling worked so well is the instability of the model. Different seeds could yield an evaluation loss anywhere from 0.33 to 0.40, so it seemed likely that those models learnt different things. Ensembling brought the stage 1 test loss from .33 to .296.</p>
<p>I had issues getting the conll2012 dataset, but I believe the coreference task would have ensembled well with the rest.</p>
<p>Congratulations to all and thank you for organising such a cool competition!</p> |
Gendered Pronoun Resolution | 9th place solution overview [0.20238] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Gendered Pronoun Resolution
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="/mateiionita">@mateiionita</a> : I've updated my post with links to the paper and the source code! Hopefully that helps!</p>
<p>Thanks!</p>
<p>Thank you!</p>
<p><a href="/taniaj">@taniaj</a> Yes, that surprised me too! I've updated my post with links to the paper and the source code!</p>
<p>Thank you!</p>
<p>I've experimented with varied context sizes. But i think in my final model, i take about 5 tokens for context.</p>
<p>Hi <a href="/mateiionita">@mateiionita</a> . Thanks for the comment! The logits are over the entire context sequence. In our case, it is the full text. So I basically take the encoder hidden representations for each word-piece in the sequence and pass them through a dense layer with two outputs - one represents the start and other represents the end. So you get two sets of logits for each word-piece - <code>start_logits</code> and <code>end_logits</code>. Of these, I only filter the ones corresponding to the A, B and pronoun spans. So if there' s a sequence such as:
"<em>Bianca was Lady of Milan by her marriage to Galeazzo II Visconti.</em>", then we have (as an example):
i) <code>input_sequence</code> = ["[CLS] by her marriage [SEP] Bianca was Lady of Milan by her marriage to Galeazzo II Visconti. [SEP]"]
ii) <code>start_logits</code> = [-5.5, -6.6, -3.4, -2.4, -4.5, 5.67, -2.3, -3.4, 1.1, -7.8,-3.2, -1.1, -6.1, -3.6, 1.1, 2.2, 1.5, 2.1, -3.1, -5.5]
iii) <code>end_logits</code> = [-4.5, -2.6, -1.4, -3.4, -3.5, 7.24, -1.3, -3.9, 2.1, -7.9,-3.5, -1.4, -3.1, -2.6, 2.1, 1.2, -2.5, 1.1, -2.1, -4.5]
So the model assigns higher score for both start and end logits to the token "Bianca".</p>
<p>I want to start by thanking the Kaggle organizers for a great competition and all the community members for keeping the discussions alive and hosting amazing kernels. Secondly, I want to thank the Hugging Face team for open sourcing a fantastic library to play around with BERT models. My solution is a pure BERT fine-tuning based approach. It's an ensemble of three tasks - Question Answering (SQUAD), Multiple Choice (SWAG) and a span classification approach. Of these three, the most interesting one is the SQUAD model - I added the SWAG and span classification models much later - but they do add value. I've seen a couple of posts where people were curious about using a question answering system - so I will go into a bit of the detail about how I did it. I will also share the code once I do some cleanup and hopefully, that'll be of help too.</p>
<h3>Pro-reference resolution as a Question Answering Task</h3>
<p>This was the first modeling idea that came to my mind when I saw this contest. And I immediately saw promising results with BERT fine-tuning. The interesting bit of this task IMO is actually asking the right question. It didn't seem like a good idea to have the question be something like "What does "he/she" refer to?" because i) There can be multiple "he/she"s and there's no easy way to disambiguate that for the model and ii) That sort of question is more human-like than machine-like in the sense that the words "refer" etc won't really help the model get any better understanding of the task.</p>
<p>So, instead, I feed it the "context tokens" surrounding the pronoun to be resolved as the "question". The answer is the entity itself. It's fascinating that the model learns to do co-reference resolution with this fuzzy input - meaning it learned to resolve a pronoun to an entity name even though that isn't directly posed as a classification task. This model is actually more powerful than what's necessary for this competition - in the sense that it produces the entity name corresponding to the pronoun without using the knowledge of A and B choices. And secondly, because of the nature of the task, we get scores (logits) over the token sequence (and not between A, B and neither). So I use another simple logistic regression with the logits produced by the BERT QA model as inputs. The logits are only extracted for the relevant A, B, and pronoun spans. This logistic regression then gives us the necessary probabilities. There's one additional detail here - the presence of "Neither" category means we wouldn't have the exact answer for those examples. I dealt with this by removing those examples from the training data. This would hurt the performance a bit because of the reduced training data - but the good thing is that the model can always predict, by design, an entity that's neither A nor B - so there is no need to explicitly represent the "neither" examples during training. There are still other details that had to be taken care of - I'll try to elaborate them in a follow-up post when I share the code!</p>
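<p>A minimal sketch of how a GAP row could be turned into such a SQuAD-style example; the window size and the field handling below are my assumptions (column names follow the GAP tsv files):</p>
<p><code>
# Build a question from the tokens around the pronoun; the answer span is the correct entity.
def to_squad_example(row, context_window=5):
    words = row["Text"].split()
    p_word_idx = len(row["Text"][:row["Pronoun-offset"]].split())   # word index of the pronoun
    left = max(0, p_word_idx - context_window)
    right = min(len(words), p_word_idx + context_window + 1)
    question = " ".join(words[left:right])
    if row["A-coref"]:                       # "Neither" rows are dropped during training
        answer, answer_start = row["A"], row["A-offset"]
    else:
        answer, answer_start = row["B"], row["B-offset"]
    return {"question": question, "context": row["Text"],
            "answer_text": answer, "answer_start": answer_start}
</code></p>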
<h3>SWAG & Span Models</h3>
<p>For the SWAG model, it's mostly just a clone of the existing "run_swag.py" example <a href="https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_swag.py">here</a> . Just had to do a few minor modifications to adapt to this challenge. And for the final SPAN extractor model - it's based off the amazing <a href="/ceshine">@ceshine</a> 's <a href="https://www.kaggle.com/ceshine/pytorch-bert-endpointspanextractor-kfold">kernel</a>. I use a lower cased tokenizer instead of the uppercased one being used there. And then I modify the weight initializations and tweak a few other hyperparameters.</p>
<p>Then, I do a 5-fold split stratified on the "gender" dimension (as opposed to "label/target" in some public kernels). It seemed like a reasonable choice given this competition is meant to address the gender bias. The predictions on the test set are the average of the predictions of each fold.</p>
<p>Finally, I just take an average of all the above three model's predictions. That's it. It's interesting that it performed well in a limited data setting without a single hand-crafted feature.</p>
<h3>Fine tuning</h3>
<p>I've implemented all of my kernels in pytorch. The number of layers to unfreeze required a lot of experimentation. The two choices that gave me good results are unfreezing either the last 6 or last 12 layers of the BERT large model. And I do this by setting the "requires_grad" attribute to False for all the parameters corresponding to those encoder layers.</p>
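<p>A minimal sketch of that freezing step, assuming a pytorch-pretrained-BERT model that exposes <code>model.bert</code>; the optimizer and learning rate here are assumptions:</p>
<p><code>
# Freeze all but the last 12 of the 24 BERT-large encoder layers.
import torch

NUM_TRAINABLE = 12
for layer in model.bert.encoder.layer[:-NUM_TRAINABLE]:
    for param in layer.parameters():
        param.requires_grad = False

optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=2e-5)
</code></p>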
<h3>Failed experiments (a.k.a future work areas)</h3>
<p>1) I've not seen any success by fine-tuning all 24 BERT large encoder layers. In fact, I couldn't get nearly as good results if I'd fine tuned more or less than 6 or 12 layers. It still puzzles me as to why that is the case and why they are the magic numbers!
2) For the SQUAD/SWAG models, I tried concatenating an additional embedding vector that encodes the word-piece token level info of whether it belongs to one of A, B or P. I thought this should've been valuable but I couldn't get it to do well in this setting.
3) This was an interesting one. I tried fine-tuning the BERT model in an unsupervised manner by training a language model on the texts extracted from the Wikipedia pages corresponding to the URLs provided. The idea behind this one was to see if I can get better BERT layer representations by leveraging the text from the contest's dataset. This way, the model might have a better initialization point and that can be handy given the limited training supervision. But that didn't seem to help much either and it's a computationally expensive process to run. My guess is that BERT representations are anyway originally obtained by training on Wikipedia. So the fine-tuning on this GAP Dataset which is also based on Wikipedia is probably pointless?
4) I hurriedly tried throwing in the title extracted from the wikipedia page's URL into the token sequence for the SQUAD Model. But the initial results didn't seem promising.
5) I also tried throwing in a good number of hand engineered features from the public kernels to the models above. They didn't seem to help either.
6) I didn't do any data augmentation. So there might be some potential gains that could be obtained in that direction.
All of these experiments could've used a lot more love. So they definitely make great candidates for future exploration!</p>
<p>Overall, I had a great time contesting. Congrats to all the winners and good luck to all participants for future!</p>
<p>EDIT (10th June, 2019):
Link to the paper: <a href="https://arxiv.org/abs/1906.03695">https://arxiv.org/abs/1906.03695</a>
Link to source code: <a href="https://github.com/rakeshchada/corefqa">https://github.com/rakeshchada/corefqa</a></p> |
Gendered Pronoun Resolution | BERT -6 layer solution [LB 0.23101] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Gendered Pronoun Resolution
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My solution was based on <a href="https://www.kaggle.com/mateiionita/taming-the-bert-a-baseline">Matei's kernel</a> and <a href="https://www.aclweb.org/anthology/D17-1018">End-to-end paper</a>.
My model is a simple BERT(no fine-tuning) + FFNN model, and the FFNN structure is from this End to End paper, out of the box.</p>
<p>Feature:
Word vectors of A, B and Pronoun (from BERT large layer -6);
Distance between P and A/B;</p>
<p>Augmentation:
Swapping A and B, which gave a 0.006 improvement.</p>
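<p>A minimal sketch of this swap augmentation on the GAP dataframe (column names follow the GAP tsv files):</p>
<p><code>
# Double the training data by swapping the A and B columns (the labels swap along with them).
import pandas as pd

def swap_ab(df: pd.DataFrame) -> pd.DataFrame:
    swapped = df.copy()
    for a_col, b_col in [("A", "B"), ("A-offset", "B-offset"), ("A-coref", "B-coref")]:
        swapped[a_col], swapped[b_col] = df[b_col], df[a_col]
    return pd.concat([df, swapped], ignore_index=True)
</code></p>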
<p>Hyperparameters:
Adam optimizer learning rate=0.0004;
FFNN hidden dimensions = 40;
drop_rate=0.65;
batchsize = 16;
10 folds cross-validation;</p>
<p>Thanks for your reading.</p> |
Gendered Pronoun Resolution | Solution sharing thread | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Gendered Pronoun Resolution
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our solution was primarily based on the kernel from Matei. We used models using different BERT embeddings (cased, uncased, small / large) and also embeddings from Flair, Fasttext and finally combined them all. We also had some basic meta features to represent the position information. Our KFold CV on all three gap sets put together is ~0.4</p>
<p>Thanks to @mateiionita @kashnitsky @kenkrige @wochidadonggua @tks0123456789 and many others for their valuable insights </p>
<p>Look forward to the solutions of top people and the tricks to fine tune the BERT for this problem. </p> |
Gendered Pronoun Resolution | 🥈 Quick overview of the 26th place solution [LB 0.24838] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Gendered Pronoun Resolution
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I look forward to reading your solution. </p>
<p>I was actually trying to catch up with your team this whole competition. 4 places and a 0.02276 difference in stage 2 is good enough I guess :) </p>
<p>Congrats on getting another silver! </p>
<p>Btw, special thanks to <a href="/kashnitsky">@kashnitsky</a>. You were really helpful during this competition. And also to <a href="/mateiionita">@mateiionita</a> and his great kernels that provided a good baseline. </p>
<p>Congratulations to winners and my team <a href="/taniaj">@taniaj</a>, <a href="/dskswu">@dskswu</a>, <a href="/arigion">@arigion</a> </p>
<p>We didn't manage to fine-tune Bert, but still had a couple of ideas that worked pretty good. </p>
<ol>
<li><p>Used Bert-large with concatenated embeddings from -3, -4, -5, -6 layers. As <a href="/ceshine">@ceshine</a> pointed out in his great kernel (<a href="https://www.kaggle.com/ceshine/pytorch-bert-endpointspanextractor-kfold">https://www.kaggle.com/ceshine/pytorch-bert-endpointspanextractor-kfold</a>), it's better not to use the last layer. After some experimentation, we settled on -3, -4, -5, and -6 layers based on CV scores. </p></li>
<li><p>Used corrected and original data (<a href="https://www.kaggle.com/c/gendered-pronoun-resolution/discussion/81331#latest-503495">https://www.kaggle.com/c/gendered-pronoun-resolution/discussion/81331#latest-503495</a>). We couldn't decide which models to choose: the ones trained on the original data or the ones trained on the corrected data. Then we tried blending them and it worked great (0.34-0.35 for original/corrected and 0.33 after blending on the Public LB). Our 2 final submissions differed only in the blending weights: we tried 0.6/0.4 and 0.5/0.5 for corrected/original data. Just like on the Public LB, the 0.6/0.4 split worked better (0.24838 vs 0.25200 Private LB)</p></li>
<li><p>Applied data augmentation inside each fold by simply swapping A and B columns and concatenating with original data. </p></li>
<li><p>10 folds, of course. </p></li>
<li><p>A simple NN (4 layers with BN, ReLU and dropout) on top of bert embeddings and linguistic features from public kernels. More complicated architectures didn't bring anything to the table. </p></li>
</ol>
<p>It's not as sophisticated as <a href="/boliu0">@boliu0</a> solution (great job, btw. Really impressive), but I still hope you find it useful. </p>
<p>You're welcome. </p>
<p>Yes, exactly. It worked better than extracting bert embeddings on the run. Or maybe there was a bug in my non-deterministic version. </p> |
Generative Dog Images | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>When n_classes=1, the squeeze() operation in <code>class Generator</code> causes the error because it deletes all the dimensions with length 1.
You need to change
<code>label_ohe = label_ohe.squeeze()</code>
to
<code>label_ohe = label_ohe.view(-1,1)</code></p>
<p>For parameter tuning, my suggestion is 1. learning rate, 2. number of features in G and D.
For further tuning, trying other loss functions may be helpful. You can check other solutions shared in this competition.
BTW, how many epochs do you train? We only had 9 hours (the kaggle kernel time limit) to use in this competition. If you train much longer than I did in ver32 (about 130 epochs), you may encounter training instability.</p>
<p>Did you change some hyperparameters? Generally speaking, the training of GANs is unstable and sensitive to the hyperparameters. You can find a discussion of this instability in the BigGAN paper.</p>
<p>I'm not familiar with the behavior of DataParallel, but if it modifies the training procedure then you need to change some hyperparameters so that the training becomes stable.</p>
<p>Thanks <a href="/carlolepelaars">@carlolepelaars</a> ! For me LeakyReLU was slightly better than ReLU, but there were randomness on the score. I didn't try other activations but it's worth trying. </p>
<p>Thanks <a href="/theoviel">@theoviel</a> ! I agree with your guess.</p>
<p>First of all, thanks to the Kaggle team for hosting such a novel competition, and congrats to all the winners! And special thanks to <a href="/cdeotte">@cdeotte</a>, <a href="/dvorobiev">@dvorobiev</a>, <a href="/jesucristo">@jesucristo</a>, who made this competition more interesting. I have learned many things about GANs through this competition. </p>
<p>To be honest, I’m very surprised at my final position. I thought I could not be in the gold medal zone, since my public score was not under 30.</p>
<p>Here is my solution(Public 31.09602, Private 55.42142).</p>
<p>・Preprocessing&Augmentations
- exclude images with extreme aspect ratio (y/x < 0.2 and y/x > 4.0)
- exclude images with intruders (thanks <a href="/korovai">@korovai</a> for sharing the kernel <a href="https://www.kaggle.com/korovai/dogs-images-intruders-extraction-tf-gan">https://www.kaggle.com/korovai/dogs-images-intruders-extraction-tf-gan</a>)
- use BoundingBox (no modification)
- Resize 64 (one side) and then RandomCrop to image size (64,64)
- HorizontalFlip(p=0.5)</p>
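<p>A minimal torchvision sketch of this pipeline (the normalization is an assumption; cropping to the bounding box happens earlier, when the images are loaded):</p>
<p><code>
import torchvision.transforms as T

transform = T.Compose([
    T.Resize(64),                  # resize so the shorter side becomes 64
    T.RandomCrop(64),
    T.RandomHorizontalFlip(p=0.5),
    T.ToTensor(),
    T.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),   # scale to [-1, 1] (an assumption)
])
</code></p>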
<p>・Model: BigGAN
- number of parameters G:10M, D:8M
- input noise from normal distribution (nz=120)
- use LeakyReLU
- attention on size 32 feature map
- use truncated trick (threshold=0.8)
- no EMA</p>
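<p>One common way to implement the truncated trick with threshold 0.8 is to sample the latent vector from a truncated normal distribution; this is a generic sketch, not necessarily the exact implementation used here:</p>
<p><code>
import torch
from scipy.stats import truncnorm

def truncated_noise(batch_size, dim_z=120, threshold=0.8):
    values = truncnorm.rvs(-threshold, threshold, size=(batch_size, dim_z))
    return torch.from_numpy(values).float()

z = truncated_noise(32)   # a batch of latent vectors for the generator
</code></p>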
<p>・Loss: BCE loss
・Optimizer: Adam (lrG=3e-4, lrD=3e-4, beta1=0.0, beta2=0.999)
・Batch Size: 32
・Epochs: 130 (maximum kernel time limit, i.e. 32400sec)
・Others
- label smoothing 0.9 </p>
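<p>A minimal sketch of the discriminator loss with label smoothing 0.9 (using the logits form of BCE is my assumption):</p>
<p><code>
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()

def d_loss(d_real_logits, d_fake_logits):
    real_labels = torch.full_like(d_real_logits, 0.9)     # one-sided label smoothing
    fake_labels = torch.zeros_like(d_fake_logits)
    return criterion(d_real_logits, real_labels) + criterion(d_fake_logits, fake_labels)
</code></p>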
<p>At the beginning of this competition, I played with standard DCGANs, which got Public scores around 55. Then I used AC-GAN and got around 50. With ACGAN-projection, I could get around 39. Then I switched to BigGAN and jumped to around 31. But after that, I struggled to improve my score and could not get under 30. I checked all of my submitted scores and found that there is some randomness in them. So I'm lucky to be in the final position. Anyway, I enjoyed this competition very much. Thank you!</p>
<hr>
<p>EDIT:
I made my kernel public <a href="https://www.kaggle.com/tikutiku/gan-dogs-starter-biggan">https://www.kaggle.com/tikutiku/gan-dogs-starter-biggan</a>
version32 is the 1st place solution</p>
<p>Thanks <a href="/dvorobiev">@dvorobiev</a> ! In G, I concatenated the breed classes (one-hot vector) with the split z and fed it into the ConditionalNorm in each layer. Also in D, I used the classes for projection (adding the label embedding to the output before the sigmoid activation). </p>
Generative Dog Images | 3rd place approach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My main goal for this competition is to get to know GAN technique. I often encountered the terminology of GAN in my readings, but did not know what GAN is about. This competition is a great opportunity for me to dive in and practice GAN technology. I want to thank Kaggle team for hosting such an innovative competition and the great efforts to ensure fairness of the competition!</p>
<p>I followed the article "Generative Adversarial Networks - The Story So Far" (<a href="https://blog.floydhub.com/gans-story-so-far/">https://blog.floydhub.com/gans-story-so-far/</a>) and tried various GAN techniques including DCGAN, CGAN, SAGAN, ProGAN, conditional ProGAN, StyleGAN and BigGAN. In the end I settled on SAGAN because it got me to a score of 40+ on my first try and seemed more stable than the other network structures. I mostly stuck with it and tuned it all the way from 40+ down to a public score of 18. My model implementation is based on this github "self-attention-GAN-pytorch implementation" (<a href="https://github.com/voletiv/self-attention-GAN-pytorch">https://github.com/voletiv/self-attention-GAN-pytorch</a>)</p>
<p>Here is the list of things I tried in optimizing the model,</p>
<ol>
<li><p>image processing
a) bounding box cropping only;
b) bounding box cropping for images with multiple objects,
for images with single object and box_size / image_size >= 0.75, use original images instead of bounding box.
c) bounding box cropping plus all original images</p>
<p>Among the three processing methods, b) gave me the best private score; c) gave me the best public score (18) but did not do well on the private dataset. </p>
<p>I learnt the image processing techniques from the kernels <a href="https://www.kaggle.com/cdeotte/supervised-generative-dog-net">https://www.kaggle.com/cdeotte/supervised-generative-dog-net</a> and <a href="https://www.kaggle.com/jesucristo/introducing-dcgan-dogs-images">https://www.kaggle.com/jesucristo/introducing-dcgan-dogs-images</a>. Special thanks to the kernel authors Chris Deotte and Nanashi!</p></li>
<li><p>losses
I experimented with losses such as "standard-dcgan", "wgan-gp", "lsgan", "lsgan-with-sigmoid", "hinge", "relative-hinge". In my experience, the best one for this problem across various models is the "standard-dcgan" loss; the second best is the "lsgan-with-sigmoid" loss. </p></li>
<li><p>Attention at different stages
According to the paper "Self-Attention Generative Adversarial Networks" (<a href="https://arxiv.org/pdf/1805.08318v2.pdf">https://arxiv.org/pdf/1805.08318v2.pdf</a>), self-attention at middle-to-high level feature maps achieves better performance because it receives more evidence and has more freedom to choose conditions on larger feature maps. My experience mostly concurs with the paper's findings. I have attention at the 32x32 feature map for the generator, but at the 16x16 feature map for the discriminator.</p></li>
<li><p>Training settings
The learning rate for the discriminator is 0.0004, and the learning rate for the generator is 0.0002;
batch size is 32 (I tried 64, but the result was not as stable as with 32);
label smoothing (thanks to the kernel <a href="https://www.kaggle.com/phoenix9032/gan-dogs-starter-24-jul-custom-layers">https://www.kaggle.com/phoenix9032/gan-dogs-starter-24-jul-custom-layers</a> and its author Nirjhar Roy);
learning rate scheduler: CosineAnnealingWarmRestarts;
number of epochs: 170 (a minimal sketch of this optimizer/scheduler setup follows below the list)</p></li>
</ol>
<p>5. the best latent space dimension size: 180</p>
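<p>A minimal sketch of the optimizer and scheduler setup from the training settings above; the betas and restart period are assumptions, and the generator/discriminator objects are assumed to be defined:</p>
<p><code>
import torch

opt_d = torch.optim.Adam(discriminator.parameters(), lr=4e-4, betas=(0.0, 0.9))
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.0, 0.9))

sched_d = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(opt_d, T_0=10)
sched_g = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(opt_g, T_0=10)
</code></p>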
<p>My best private score kernel(66.97): <a href="https://www.kaggle.com/lisali/sagan-submit-2?scriptVersionId=18714508">https://www.kaggle.com/lisali/sagan-submit-2?scriptVersionId=18714508</a></p> |
Generative Dog Images | 8th place solution(private 89) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>code</h1>
<ul>
<li><p>kernel
<a href="https://www.kaggle.com/katotetsuro/stylegan-private-89?scriptVersionId=18524688">https://www.kaggle.com/katotetsuro/stylegan-private-89?scriptVersionId=18524688</a></p></li>
<li><p>github
<a href="https://github.com/katotetsuro/chainer-stylegan/tree/ac">https://github.com/katotetsuro/chainer-stylegan/tree/ac</a></p></li>
</ul>
<p>FYI: This scripting style is borrowed from this:
<a href="https://github.com/lopuhin/kaggle-script-template">https://github.com/lopuhin/kaggle-script-template</a></p>
<p>I prefer script style, which is easy to manage by git.</p>
<h1>summary</h1>
<p>To be honest, there is no special technique in my solution.
I chose StyleGAN, then looked for good parameters.
My StyleGAN started with a public score of 140; after tuning the parameters, I got to 35.</p>
<ul>
<li>for preprocessing: crop image by bounding box, random horizontal flip</li>
<li>reduce input channel size of style generator to 256 because of resource limitation.</li>
</ul>
<p>all hyper parameters are shown in below.
<a href="https://github.com/katotetsuro/chainer-stylegan/blob/ac/src/stylegan/config.py">https://github.com/katotetsuro/chainer-stylegan/blob/ac/src/stylegan/config.py</a>
<a href="https://github.com/katotetsuro/chainer-stylegan/blob/ac/script_template.py">https://github.com/katotetsuro/chainer-stylegan/blob/ac/script_template.py</a></p>
<h1>trials that didn't work for me</h1>
<ul>
<li>Truncation trick from the StyleGAN paper. I tried various truncation strengths, but no truncation worked best.</li>
<li>Auxiliary Classifier</li>
<li>Data cleaning. I thought training data quality was quite important, so I tried several procedures (dog face detection, manually picking good images, ...), but nothing worked.
Today I read some kernels and learned that there are some good techniques for this.
This is one area of my solution with room for improvement, I guess.</li>
</ul>
<p>It was a really fun competition! Thanks to the Kaggle team!!</p>
Generative Dog Images | BigGAN solution - public LB 15 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, <a href="/wucf20">@wucf20</a>
try to change <em>config['resolution'] = 64</em> in train.py at line 23
I hard-coded several competition-specific parameters, namely the image size and the number of classes. You may want to delete these lines from run() function in train.py and describe them as configurable parameters in prepare_parser() function from utils.py</p>
<p>I think I missed the MultiEpochSampler from the original code. It takes me only 02:43 for 1000 iterations with bs = 32 compared to ~4:30 without using it. Will see how far I can get with it...</p>
<p>First of all, I would like to thank organizers for such a wonderful competition. Generative modeling is a really interesting topic, and I hope to see more such competitions in the future. Also thanks to all participants for sharing their ideas and keeping the forum active :)</p>
<p>Here is a quick summary of my solution...</p>
<p><em><strong>Search for a good model</strong></em>
When I entered this competition, I had zero experience with GANs or other generative models. So I started with simple models like DCGAN or WGAN, but quickly switched to more complex solutions like Self-Attention GAN. The best score I could get out of SAGAN was 39-40 MiFID, and then I got stuck there. So I continued my search for more efficient models. I played with NVIDIA's StyleGAN for a few days, but didn't get any positive results using it. And then I found BigGAN. It took me a while to take full advantage of this new model given the number of settings it had.
Also I should mention the great pytorch repo, created by one of the authors of BigGAN, which made using this model a lot easier for me:
<a href="https://github.com/ajbrock/BigGAN-PyTorch">https://github.com/ajbrock/BigGAN-PyTorch</a></p>
<p><em><strong>Data processing</strong></em>
My goal with image transformations at first was to keep as much useful information as possible. Using simple center cropping often leaves "headless" dogs, which might be a bit less recognisable by the scoring network. So I tried to expand the useful image area as much as possible and capture the whole dog, or at least the more recognisable parts like the head with upper body. I ended up not using center cropping at all and padding tall or wide images most of the time. Given how the FID metric is sensitive to missing modes, it's reasonable to keep most of the images, even the unhandy ones.
I focused mostly on geometric transformations. Pixel-level augmentation didn't bring any improvements for me. Even the most gentle changes to brightness or contrast usually degraded the MiFID metric's value.
Also the choice of interpolation method matters, and it was a hard one for me. The popular and recommended way of downsizing images in OpenCV is using INTER_AREA, which gives the most visually appealing smooth images. But the MiFID value of the resulting images usually differs by about 1 point compared to other methods (like 10.x vs 9.x), and the generated images I get with it are a bit worse. So I ended up using other resizing methods, which led to sharper but noisier images. Which brought me to using a denoising post-processing algorithm on generated images :) </p>
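<p>For illustration only (this is a minimal OpenCV sketch, not the actual pipeline; the reflect-padding choice, interpolation method and denoising parameters are assumptions), the preprocessing and post-processing described above could look like this:</p>
<pre><code>import cv2
import numpy as np

def pad_to_square(img):
    # Pad the shorter side instead of center-cropping, so the whole dog is kept.
    h, w = img.shape[:2]
    size = max(h, w)
    top = (size - h) // 2
    bottom = size - h - top
    left = (size - w) // 2
    right = size - w - left
    return cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_REFLECT)

def resize_64(img):
    # A sharper (but noisier) interpolation than INTER_AREA.
    return cv2.resize(pad_to_square(img), (64, 64), interpolation=cv2.INTER_LANCZOS4)

def denoise_generated(img_uint8):
    # Light denoising applied to generated images as post-processing.
    return cv2.fastNlMeansDenoisingColored(img_uint8, None, 3, 3, 7, 21)
</code></pre>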
<p><em><strong>Hyperparameters</strong></em>
I tried different numbers of channels for D and G and different placements of the attention layer. I found it a lot easier to use a generator with fewer parameters than the default settings. I also ended up using unbalanced networks with D having twice as many channels in conv layers as G (I used mostly 32-64, but tried 28-56, 36-64, 48-72, 48-96). I didn't find any benefit in using a more complex generator. Additional complexity always slowed training down and resulted in a worse score with no clear visual improvements. My guess is the current task is too simple for a model designed to represent complex datasets like the ImageNet one. I guess an equal reduction in channels in D and G should also work fine, but I left D with additional channels because it didn't influence iteration speed much.
Despite the benefits of using higher batch sizes described in the BigGAN paper, I ended up using only 32 :) I believe that the total number of iterations matters too, and a limited time window presents us with the choice of whether we want to update our models more often or more precisely.
Also I use shared class-embeddings layers in G with hierarchical noise injection.</p>
<p><em><strong>Other things I tried:</strong></em>
1. BigGAN-deep. It never surpassed the performance of the original model for me.
2. truncation of z-vector. Didn't get much with this architecture, though for BigGAN-deep there were some improvements.
3. stochastic weight averaging (SWA). Using it, my MiFID value usually came out about 1-1.5 points higher. Also, the choice of starting point matters and shifts all the time while I change the data pipeline and other settings. So I decided to keep the EMA which came with the original code.
4. cyclic learning rates with cosine annealing and soft/hard warmups. With some settings it had a positive impact but tended to lack consistency.
5. different batch norm substitutes. Didn't spend much time on it, might be an area of potential improvements.
6. other activation functions. I remember gelu didn't perform very well, but it worked :)
7. relativistic losses. I tried them with several different architectures. For DCGAN it worked nicely, with SAGAN it didn't make much difference. I didn't manage to make RaHinge loss work with the BigGAN model. It might be a consequence of other settings I had chosen (like a stronger discriminator compared to the generator net).
8. other possible z distributions as described in the BigGAN paper (Appendix E). Bernoulli didn't work at all, censored Gaussian gave a bit worse results than the usual normal distribution.</p>
<p>You can find full code for my solution here:
<a href="https://www.kaggle.com/dvorobiev/doggies-biggan-sub-data-aug-3">https://www.kaggle.com/dvorobiev/doggies-biggan-sub-data-aug-3</a>
<a href="https://www.kaggle.com/dvorobiev/doggies-biggan-sub-final">https://www.kaggle.com/dvorobiev/doggies-biggan-sub-final</a>
<a href="https://github.com/dmitry-vorobiev/kaggle-generative-dog-images">https://github.com/dmitry-vorobiev/kaggle-generative-dog-images</a></p> |
Generative Dog Images | Data Augmentation - LB 47 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Typically VGGNet (DCGAN) style GAN architecture use <code>beta1=0.5</code>, <code>beta2=0.999</code> while ResNet (BigGAN) style GAN use <code>beta1=0</code>, <code>beta2=0.9</code>.</p>
<p>The question is whether you would benefit from more training. When trying to train a BigGAN in under 9 hours, the issue is that it doesn't have enough time to train. Most public kernels are shallow DCGAN and seem to have no trouble training in under 9 hours. </p>
<p>Variable <code>beta1</code> is training momentum. Without momentum (<code>beta1=0</code>) training is fast when the gradient is large and slower when gradient is small. With momentum, training stays at a more constant fast pace regardless of gradient size. If you set <code>beta1=0.5</code> for ResNet GAN it seems to train too fast and explode. However I added a little momentum (<code>beta1=0.2</code>) to speed things up and it helped without exploding.</p>
<p>I also increased <code>alpha=0.0002</code> to <code>alpha=0.0003</code> which is the learning rate and that helped. I decreased the size of the ResNet by using 48 channels instead of 64. There was no decrease in image quality but this shortened the iteration train time allowing for more iterations within 9 hours. Similarly a batch size of 32 is quicker than a batch of 64 with no decrease in quality. These two changes allow for 40000 iterations in under 9 hours.</p>
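<p>For reference, here is a minimal PyTorch-style sketch of these optimizer settings (the <code>netG</code>/<code>netD</code> modules below are placeholders, not the actual GAN networks):</p>
<pre><code>import torch
import torch.nn as nn

# Placeholder modules; in the real setup these are the ResNet GAN G and D.
netG = nn.Sequential(nn.Linear(128, 64))
netD = nn.Sequential(nn.Linear(64, 1))

# alpha=0.0003 with a little momentum (beta1=0.2) trains faster than the
# usual ResNet GAN setting (alpha=0.0002, beta1=0) without exploding.
optG = torch.optim.Adam(netG.parameters(), lr=3e-4, betas=(0.2, 0.9))
optD = torch.optim.Adam(netD.parameters(), lr=3e-4, betas=(0.2, 0.9))
</code></pre>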
<p>Standard BigGAN does 1 generator update followed by 5 discriminator updates. All at <code>alpha=0.0002</code>. I'm now trying to do 1 generator at <code>alpha=0.0002</code> followed by 2 discriminator at <code>alpha=0.0005</code> because <code>2*0.0005 = 5*0.0002</code>. If this works I can do 80000 iterations within 9 hours instead of 40000.</p>
<p>Currently my BigGAN is at LB 42. I'll let you know if 80000 iterations improves that.</p>
<p>There are brilliant solutions here at Kaggle, so yes I believe the winners (Bestfitting, Dmitry, Yabea, MeisterMorxrc et al, Theo, Alvaro) discovered techniques that could advance state of the art GANs.</p>
<p>However if you compare the FID 12 images from BigGAN 128 <a href="https://www.kaggle.com/cdeotte/big-gan-128-lb-12">here</a> with the FID 12 images from Bestfitting's model <a href="https://www.kaggle.com/c/generative-dog-images/discussion/104281#600211">here</a>, you will notice that BigGAN 128 appear better to the eye which suggests that similar FID doesn't indicate that both models are now equal.</p>
<h1>Generative Dog Images Competition</h1>
<p>Thank you Kaggle for hosting an exciting competition. It was tons of fun and a great learning experience. Thank you everyone for great discussions and code sharing. </p>
<p>I wish we had more time. There are many more ideas to try. My first model is an improvement from my previously posted LB 100 CGAN. I added more training examples by employing random cropping and added additional auxiliary classification loss to the discriminator. Furthermore precompiling the training loop with <code>@tf.function</code> allows twice as many training epochs by doubling the speed.</p>
<h1>Solution 1: TensorFlow AC-RaLS-DCGAN, LB 52</h1>
<p>Kaggle notebook <a href="https://www.kaggle.com/cdeotte/dog-breed-acgan-lb-52">here</a>. Version 16 is my old CGAN scoring LB 100. And version 17+ is my new ACGAN scoring LB 52+.</p>
<h2>Image Preprocess</h2>
<p>The blue rectangles are the provided bounding boxes. I cropped all images to the yellow squares and resized them to 80x80 pixels. I chose a square that captures my best guess of where the dog head is and added padding beyond the bounding box to allow for random cropping during training.
<img src="http://playagricola.com/Kaggle/d12.png" alt="image"></p>
<h2>Data Augmentation, Random Cropping</h2>
<p>Each original image was cropped to an 80x80 square that included 25% extra image beyond the bounding box (yellow squares below). This extra room allows us to randomly choose a 64x64 crop within it (red squares below) and mostly stay outside the bounding box (blue rectangles).
<img src="http://playagricola.com/Kaggle/d13.png" alt="image"></p>
<h2>Additional Auxiliary Classification Loss</h2>
<p>The Generator learns via RaLSGAN loss, which compares the 1-unit output from real images with the 1-unit output from fake images. The Discriminator learns via basic GAN loss on that same 1-unit output, plus we add binary cross-entropy of the dense 121 output against the class labels. This converts the CGAN into an ACGAN.
<img src="http://playagricola.com/Kaggle/disc4.jpg" alt="image"></p>
<h2>TO DO: Add Spectral Normalization and Batch Norm Modulation</h2>
<p>If I had more time I would add better regularization/normalization to my GAN. Then we could make the conv nets (gen and disc) bigger and train more without exploding. Next I would modulate batch norm with class labels and initial noise seed (like CBN-GAN, StyleGAN, and Self Mod).</p>
<h1>Solution 2: BigGAN, SN-CBN-ResNetGAN, LB 47</h1>
<p>With only 2 days remaining, I didn't have time to improve my GAN further so I searched GitHub for a version of BigGAN! I found a model like BigGAN <a href="https://github.com/pfnet-research/sngan_projection">here</a>. I copied and pasted the code into a Kaggle notebook and connected it to my training data pipeline described above. With no tuning it scored LB 47! BigGAN has the capability to score LB 12 as shown in this Kaggle notebook <a href="https://www.kaggle.com/cdeotte/big-gan-256-lb-15">here</a>. So I believe with more time, I could tune BigGAN to score under LB 30 in a 9 hour Kaggle kernel.</p>
<h2>My BigGAN Dogs</h2>
<p><img src="http://playagricola.com/Kaggle/resgan281419.png" alt="image"></p>
<h2>Update</h2>
<p>After a few experiments, my BigGAN achieves LB 42 with the following changes from default. Batch size to 32 (from 64), learning rates to Adam alpha=0.0003, beta1=0.2, beta2=0.9 (from 0.0002, 0, 0.9). Discriminator channels to 48 (from 64), generator channels to 48 (from 64), train for 40,000 iterations with exponential decay beginning at 32,000.</p>
<h1>References</h1>
<p>The following papers helped me learn about GANs:
* 2014 Basic GAN: <a href="https://arxiv.org/abs/1406.2661">Generative Adversarial Nets</a> by Goodfellow, et al
* 2016 GAN Tricks: <a href="https://arxiv.org/abs/1606.03498">Improved Techniques for Training GANs</a> by Goodfellow, et al
* 2016 GAN Tricks: <a href="https://arxiv.org/abs/1701.00160">NIPS 2016 Tutorial: Generative Adversarial Networks</a> by Goodfellow
* 2015 DCGAN: <a href="https://arxiv.org/abs/1511.06434">Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks</a> by Radford, et al.
* 2016 LS-GAN: <a href="https://arxiv.org/abs/1611.04076">Least Squares Generative Adversarial Networks</a> by Mao, et al.
* 2018 Ra-GAN: <a href="https://arxiv.org/abs/1807.00734">The relativistic discriminator: a key element missing from standard GAN</a> by Jolicoeur-Martineau
* 2017 ProGAN: <a href="https://arxiv.org/abs/1710.10196">Progressive Growing of GANs for Improved Quality, Stability, and Variation</a> by Karras, et al
* 2017 WGAN: <a href="https://arxiv.org/abs/1701.07875">Wasserstein GAN</a> by Arjovsky, et al.
* 2017 WGAN-GP: <a href="https://arxiv.org/abs/1704.00028">Improved Training of Wasserstein GANs</a> by Gulrajani, et al
* 2018 StyleGAN: <a href="https://arxiv.org/abs/1812.04948">A Style-Based Generator Architecture for Generative Adversarial Networks</a> by Karras, et al
* 2014 CGAN: <a href="https://arxiv.org/abs/1411.1784">Conditional Generative Adversarial Nets</a> by Mirza, et al
* 2016 SGAN: <a href="https://arxiv.org/abs/1606.01583">Semi-Supervised Learning with Generative Adversarial Networks</a> by Odena
* 2015 CatGAN: <a href="https://arxiv.org/abs/1511.06390">Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks</a> by Springenberg
* 2016 ACGAN: <a href="https://arxiv.org/abs/1610.09585">Conditional Image Synthesis With Auxiliary Classifier GANs</a> by Odena, et al
* 2018 CBN-GAN: <a href="https://arxiv.org/abs/1802.05637">cGANs with Projection Discriminator</a> by Miyato, et al
* 2018 SN-GAN: <a href="https://arxiv.org/abs/1802.05957">Spectral Normalization for Generative Adversarial Networks</a> by Miyato, et al
* 2018 BigGAN: <a href="https://arxiv.org/abs/1809.11096">Large Scale GAN Training for High Fidelity Natural Image Synthesis</a> by Brock, et al
* 2018 Self Mod: <a href="https://arxiv.org/abs/1810.01365">On Self Modulation for Generative Adversarial Networks</a> by Chen, et al</p>
<h1>Summary</h1>
<p>To build a great GAN, you first build two great convolutional networks (Gen and Disc). Use either ResNet (BigGAN) or VGGNet (DCGAN) architecture. Next choose a loss function. Hinge loss is the current favorite. (Others are GAN, RaLSGAN, WGAN, ACGAN, etc) Then to prevent training from exploding, add normalization/regularization such as weight clipping, gradient penalty (WGAN-GP), batch norm, or spectral normalization (SN-GAN), etc. For best performance add modulation via batch norm using either class labels (CGAN), style labels (StyleGAN), or random noise labels (Self Mod). Next optimize your hyperparameters; alpha, beta1 and beta2 of your Adam optimizer. Choose if the discriminator will train extra epochs versus generator. Lastly choose a dataset, add more data with data augmentation, and start training on a fast GPU !</p> |
Generative Dog Images | First competition 43.6 public score | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi </p>
<p>Here is our kernel: <a href="https://www.kaggle.com/shaygu/t3h-d0g-mak3rz-public-43-fid">https://www.kaggle.com/shaygu/t3h-d0g-mak3rz-public-43-fid</a>. We specifically attached the version that calculates FID every few epochs; we think it brings more insight into the network, and it is also the method we used for optimization.</p>
<p>Our knowledge was very limited walking into this competition, and we joined a little more than a month ago, so we decided to start with the best public model and see where it got us. Our steps along the way are discussed further in the kernel; however, I will also briefly touch on the main points here. </p>
<p>Using a lot more parameters for the network improved our score from 60 to 55. Calculating the FID every few epochs allowed us to observe which network performs better (more consistently), which made it possible to introduce a cyclic learning rate efficiently; together with small hyper-parameter optimizations this got us to ~50. The last big step was changing the weight initialization to a broader distribution (0, 0.15-1 instead of the 0, 0.02 commonly used), which got us to ~46, and from there to 43 it's minor optimizations. </p>
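<p>For illustration, a broader DCGAN-style weight initialization can look like the following minimal PyTorch sketch (assuming a setup like the public kernel; whether the batch-norm layers were re-initialized the same way is an assumption):</p>
<pre><code>import torch.nn as nn

def weights_init(m):
    # Broader std than the usual DCGAN value of 0.02.
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        nn.init.normal_(m.weight.data, 0.0, 0.15)
    elif classname.find('BatchNorm') != -1:
        nn.init.normal_(m.weight.data, 1.0, 0.15)
        nn.init.constant_(m.bias.data, 0.0)

# usage: netG.apply(weights_init); netD.apply(weights_init)
</code></pre>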
<p>It was a very interesting competition and we've learned a lot. Hopefully, there will be another GAN competition in the near future.</p>
<p><a href="/eviatarai">@eviatarai</a> <a href="/yogevh">@yogevh</a> </p>
<p><a href="/veeralakrishna">@veeralakrishna</a> Thanks for the reply, glad you liked it!</p>
<p><a href="/utkukubilay">@utkukubilay</a> Thanks, I appriciate the reply!</p> |
Generative Dog Images | Journey to LB 44.69 (ResNet DCGAN+Conditional BN) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi All
First, I want to thank Kaggle for holding this great competition. This is the first GAN competition, and I really enjoyed it and learned a lot. Thanks everyone for the great discussions and kernels, and thanks <a href="/cdeotte">@cdeotte</a> for the detailed explanations of GANs and the kernels you shared.</p>
<p>Second, many thanks for the great RaLSGAN kernel from <a href="/sakami">@sakami</a>. I forked it and ran some experiments to get my final solution, LB 44.69. Below is my final submission kernel: <a href="https://www.kaggle.com/super13579/ralsgan-dogs-resnet-cbn?scriptVersionId=18722384">https://www.kaggle.com/super13579/ralsgan-dogs-resnet-cbn?scriptVersionId=18722384</a></p>
<h2>Journey to LB 44.69</h2>
<h3>1. Increase the convolution output channel count of the Discriminator, LB 69 to 54</h3>
<p>Tripling the convolution output channel count of the Discriminator improved LB 69 (original kernel) to LB 54.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F316036%2Fc971fbc7413bc7161bca99fd04a0c3bf%2Fexperience.JPG?generation=1566137754427868&alt=media" alt=""></p>
<h3>2. Add pixel normalization and spectral normalization to the Generator, LB 54 to 52</h3>
<p>Adding pixel normalization and spectral normalization to the Generator improved LB 54 to LB 52.</p>
<h3>3. Using Breed Label, LB 52 to 49</h3>
<p>I added an auxiliary classification (Aux) layer at the end of the discriminator and chose a good weight to merge it into the loss function (fine-tuned manually), LB 52 to LB 49.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F316036%2F99b6b3274307a43255f302cf23b53c45%2Fexperience1.JPG?generation=1566138320702129&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F316036%2F8280dbba30b3340ab7d0259960c1d2da%2Fexperience2.JPG?generation=1566138413927201&alt=media" alt=""></p>
<h3>4. Add ResNet blocks and conditional batch normalization, LB 49 to 44</h3>
<p>I saw that recent GAN papers all use ResNet blocks and conditional batch normalization, so I implemented them: ResNet blocks for the Generator and Discriminator, CBN for the Generator, LB 49 to LB 44. You can refer to my final kernel to see how they are implemented.</p>
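<p>For readers unfamiliar with CBN, here is a generic PyTorch-style sketch of conditional batch normalization (an illustration rather than the exact code in the kernel): the per-channel scale and shift are looked up from the breed label.</p>
<pre><code>import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    # BatchNorm whose scale (gamma) and shift (beta) come from the class label.
    def __init__(self, num_features, num_classes):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.gamma = nn.Embedding(num_classes, num_features)
        self.beta = nn.Embedding(num_classes, num_features)
        nn.init.ones_(self.gamma.weight)
        nn.init.zeros_(self.beta.weight)

    def forward(self, x, y):
        out = self.bn(x)
        g = self.gamma(y).unsqueeze(-1).unsqueeze(-1)
        b = self.beta(y).unsqueeze(-1).unsqueeze(-1)
        return g * out + b
</code></pre>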
<p>These are dog images from my best scoring kernel; some heads are missing XD
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F316036%2F4cc0a8e2c2d0cb91924ce42ecc25dc00%2FDog%20image.JPG?generation=1566139332159491&alt=media" alt=""></p>
<h2>Summary</h2>
<p>I think BigGAN is the biggest winner in this competition. I had seen some papers mention it, but I thought it would exceed the time limit (9 hours) because it looks like a "Big" model, so I didn't implement it and only implemented ResNet blocks and CBN. I learned a lot about GANs from this competition; thanks everyone for the great discussions and kernels! GANs are very interesting!</p>
Generative Dog Images | My DCGAN solution LB 30.65 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, you are right. I forgot to mention that.</p>
<p>Hi there!</p>
<p>The notebook for my solution is available here : <a href="https://www.kaggle.com/alvaroma/dcgan-lb-30-65">https://www.kaggle.com/alvaroma/dcgan-lb-30-65</a></p>
<p>Keypoints:</p>
<ul>
<li>Pytorch DCGan</li>
<li>NZ 64, Batch 32, LRG 0.0004, LRD 0.0008</li>
<li>Cropped images, selecting only those with ratio 1:50</li>
<li>Transformations: Flip Horizontal, Random Crop</li>
<li>Early stop at best FID</li>
<li>Select best random images</li>
</ul>
<p>Good luck to all participants!</p>
<p>Thanks Chris. The idea was selecting "squared" images to better fit in 64x64, so W / H < 1.50 (or H / W if H > W). I tried different ratios (1.25, 1.50, 1.75) and 1.50 seemed to perform best.</p>
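<p>In code, that selection rule is roughly the following (a minimal sketch; the exact implementation in the notebook may differ):</p>
<pre><code>def keep_image(width, height, max_ratio=1.50):
    # Keep only roughly square crops: W/H (or H/W) must stay below 1.50.
    long_side, short_side = max(width, height), min(width, height)
    return long_side / short_side < max_ratio
</code></pre>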
Generative Dog Images | My Solution with Code (Public LB 45.6) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your question.</p>
<p>I changed alpha from 0.0002 to 0.0005 for the discriminator and n_dis from 5 to 2. I did not change the channel size. I cropped squares following the bounding boxes and resized so that the length of the shorter edge is 64, then I randomly cropped to 64x64. I set the random seed to 42.</p>
<p>After getting my best score, I changed my code to save the resized images to the hard drive and increased the iterations, but I got scores around 46.8 (almost the same score as you). Moreover, when I changed the random seed to 0 without increasing iterations, I got 52.2. From this, I realized that I had happened to find good hyper-parameters.</p>
<p>Thank you to all the participants in this competition.
Here is my solution, which gets 45.6 on the public LB.</p>
<h2>Solution</h2>
<p>My refactored code is available at <a href="https://github.com/fujibo/kaggle-dog">https://github.com/fujibo/kaggle-dog</a>.</p>
<p>My code is based on SN-GAN with projection discriminator <a href="https://github.com/pfnet-research/sngan_projection">https://github.com/pfnet-research/sngan_projection</a>.
I add the following modifications to the base code.
- Apply spectral normalization to the generator
- Increase the learning rate and reduce the number of critics of the discriminator
These techniques are used in SA-GAN, which outperforms SN-GAN.
I did not use the original SA-GAN code because I am not good at tensorflow.</p>
<p>Moreover, I tuned hyper-parameters as follows.
- change <code>batch size</code> from 64 to 128
- change <code>iteration_decay_start</code> from 0 to 18000
- change <code>iteration</code> to 24000</p>
<h2>Insights</h2>
<ul>
<li>To reduce training time, I generated dog images at 32x32 pixels followed by bilinear resizing, but obtained a worse score.</li>
<li>Different trials somehow have different training times; that is, trial A cannot finish 27000 iterations, while trial B can finish 28000 iterations.</li>
</ul> |
Generative Dog Images | Project: These dogs don't exist :) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'll update it this week, just fork it, add something (images, code, add yourself as contributor at the README) and send me a pull request :)</p>
<p>Thanks to Kaggle for launching this comp <a href="/juliaelliott">@juliaelliott</a> <a href="/wendykan">@wendykan</a> and motivating me to do this...</p>
<p>Before this competition I knew about GANs because of these websites:
- <a href="https://thispersondoesnotexist.com/">https://thispersondoesnotexist.com/</a> (faces generated using GANs)
- <a href="https://thiscatdoesnotexist.com/">https://thiscatdoesnotexist.com/</a> (cats generated using GANs)</p>
<p>so I decided to do the same but with <strong>dogs</strong>!</p>
<p>> <a href="https://mv-lab.github.io/thisdogdoesnotexist/">https://mv-lab.github.io/thisdogdoesnotexist/</a></p>
<p>I'm hosting it using GitHub until I get the domain. I would like to ask the winners for permission to upload the images you have generated and your code to that repository.
The idea is to show 100 images from those amazing models, and people will be able to see which model generated them, and obviously the author.
I'll try to generate 256x256 images :)
<strong>Anyone can contribute</strong>, in fact I ask you to do so.
btw, the actual pictures are from <a href="/tikutiku">@tikutiku</a> (Tom, the winner) thanks!</p>
<p><a href="/yukia18">@yukia18</a> <a href="/lisali">@lisali</a> <a href="/bestfitting">@bestfitting</a> <a href="/dvorobiev">@dvorobiev</a> <a href="/theoviel">@theoviel</a> <a href="/markpeng">@markpeng</a> <a href="/katotetsuro">@katotetsuro</a> <a href="/johannl">@johannl</a> <a href="/seefun">@seefun</a> <a href="/jionie">@jionie</a> <a href="/leonshangguan">@leonshangguan</a> <a href="/phoenix9032">@phoenix9032</a> <a href="/francoisdubois">@francoisdubois</a> <a href="/manojprabhaakr">@manojprabhaakr</a> <a href="/super13579">@super13579</a> <a href="/alvaroma">@alvaroma</a> <a href="/cdeotte">@cdeotte</a> <a href="/shaygu">@shaygu</a> <a href="/hirune924">@hirune924</a> ....</p> |
Generative Dog Images | Public LB 15.81605 solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks. I also changed the original BigGAN only a little (shared embedding, hierarchical latent noise, projection, and exponential moving average are explained in <a href="https://arxiv.org/pdf/1809.11096.pdf">paper</a>). Key points that I listed improved my simple DCGAN, so I think they are key points.</p>
<p>Here are some supplementary details about my settings.
- Batch size 64 and 128 worked, but 32 and 256 didn't.
- I tried hinge loss, but it didn't work.
- In the original BigGAN, the learning rates of the generator and discriminator are 5e-5 and 1e-4 respectively, but they were too small for 9-hour training.
- The channel sizes of the generator and discriminator are less than half of the originals. I tried GANs with imbalanced channel sizes, but they didn't work.</p>
<p>We basically used the same architecture, but the settings are quite different. It is interesting.</p>
<p>Here is one of my final submitted kernels (I selected quite similar two kernels for final submission). </p>
<p><a href="https://www.kaggle.com/yukia18/sub-rals-ac-biggan-with-minibatchstddev">[Sub] RaLS AC-BigGAN with MinibatchStddev</a>
Version 3 got public LB 15.81605.</p>
<p>FID emphasizes sample diversity too much. My images generated from latents truncated with a loose threshold (~2) are not that good in terms of quality but get a good score. Once you download images.zip from my kernel, you can see that my generated images are of crappy quality but diverse.</p>
<p><strong>Key points:</strong></p>
<ul>
<li>Model: based on BigGAN
<ul><li>shared embedding of dog breed labels</li>
<li>hierarchical latent noise</li>
<li>projection</li>
<li>auxiliary classifier (ACGAN)</li></ul></li>
<li>Loss: RaLS with weighted auxiliary classification loss</li>
<li>Batch size: 128</li>
<li>Exponential Moving Average of generator weights </li>
<li>Uniform input noise on real images (a minimal sketch of this follows below)</li>
</ul> |
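<p>Sketch for the last key point (the noise scale and clamping are assumptions for illustration, not the exact values used):</p>
<pre><code>import torch

def add_uniform_noise(real_images, scale=1.0 / 128):
    # Small uniform noise on real images (assumed to be in [-1, 1]) so the
    # discriminator cannot latch onto the discrete 8-bit pixel grid.
    noise = (torch.rand_like(real_images) * 2 - 1) * scale
    return (real_images + noise).clamp(-1, 1)
</code></pre>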
Generative Dog Images | Public LB 30 Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everybody !
The code for my solution is available here : <a href="https://www.kaggle.com/theoviel/conditional-progan-30-public">https://www.kaggle.com/theoviel/conditional-progan-30-public</a>
I spent some time yesterday cleaning it and commenting it a bit.</p>
<p>Overall, I took a PyTorch ProGAN implementation from GitHub about 20 days after the beginning and stuck with it for the rest of the comp. I quickly reached 32 but couldn't get much further. Guess I should've moved to BigGAN, which seemed to perform better :)</p>
<p>I'm definitely going to enjoy reading others' solutions though.
Special thanks to the Kaggle team for the extra effort put into the competition, hopefully reviewing kernels won't be too painful. </p>
<p>I did not read about VGan; I'll take a look at it though!</p>
<p>ProGAN has more than the progressive growing trick: it has pixelnorm, minibatch std, EMA weight decay and other interesting stuff that definitely helps the network converge.</p>
<p>Thanks Chris! The paper is indeed great, lots of tricks and everything.
I wish I did the implementation, but I pretty much took the architecture from <a href="https://github.com/akanimax/pro_gan_pytorch">https://github.com/akanimax/pro_gan_pytorch</a>
My first tries at manually implementing the arch were not that successful.</p> |
Generative Dog Images | When BigGAN met StyleGAN (public 12.4 solution) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I have read you solution, and I found we did a lot of similar experiments and got similar results, it confirmed a lot of effective and not so effective parts of BigGAN and others architecture, it's very interesting and useful. </p>
<p>If you read my source codes, you will find I have moved many interesting mechanisms to BigGAN framework, for example, you can use my mix-style parameter to train a model to get similar effects of the SytleGAN. so this may be one of tricks to train StyleGAN efficiently by removing progressive training? But I have not tested on larger datasets and large image sizes, I believe it will work. </p>
<p>I wrote a script to convert the project to a .py file indeed.</p>
<p>It's inconvenient to edit source code in a kernel, so I wrote the code on my local computer, converted it to a file and pasted it into the kernel. I am cleaning the code.</p>
<p>I have just put a link in my main post; you can check the images, one class of dogs per line. When the metric is FID, the variety is important.</p>
<p>It's an interesting competition, and this is the first time I have written a solution without knowing the private LB score. Great thanks to Kaggle for the efforts to host this competition and the quick response to the community by <a href="/juliaelliott">@juliaelliott</a>, <a href="/wendykan">@wendykan</a>. </p>
<p>My solution may be described as a mix of StyleGAN and BigGAN.</p>
<p>Respect to the authors of the papers and these two github repositories:
<a href="https://github.com/ajbrock/BigGAN-PyTorch">https://github.com/ajbrock/BigGAN-PyTorch</a>
<a href="https://github.com/rosinality/style-based-gan-pytorch">https://github.com/rosinality/style-based-gan-pytorch</a></p>
<p><strong>Challenges</strong>:
1. Limited resources and limited time to train a GAN from scratch.
2. A new kind of challenge with new rules and settings. It took me 2 weeks to choose between this one and the YouTube 8M video challenge.</p>
<p><strong>Solution</strong>:</p>
<p>I tried DCGAN and ACGAN to get familiar with this competition at first, and then selected related papers; I picked about 40 papers which might be useful.</p>
<p>I was impressed by the results of StyleGAN and tried to make it work more efficiently so I could run it in a kernel, but I found that Progressive GAN and its followers were inefficient to train. Still, I spent about 10 days tuning it and got an FID of about 35. I guessed that this score was not enough for the top 10, but I thought I should spend more time on it to understand every detail before I stepped forward.</p>
<p>Then I tried VQ-VAE2 and a lot of other mechanisms and I was not satisfied with the efficiency and the scores.</p>
<p>BigGAN was at the top of my list of most promising solutions, but the <strong>BIG</strong> and <strong>DEEP</strong> in the model name prevented me from trying it in a hurry, and the paper stated that using more channels was important, so I worried that it might be inefficient in this competition and planned to try it after I had tested the others. To my surprise, the channel count was not so important: when I decreased G_ch and D_ch from 64 to 24, the FID score did not get much worse and could easily reach below 30.</p>
<p>What's more, the project structure of <a href="https://github.com/ajbrock/BigGAN-PyTorch">https://github.com/ajbrock/BigGAN-PyTorch</a> is very good: we can run a lot of experiments by just modifying some parameters, so I could start many experiments and go out for a bicycle ride or a swim :) After several experiments, my score reached about 24. </p>
<p>After I checked and compared the architectures and mechanisms of BigGAN and StyleGAN, I found that spectral normalization and class-conditional batch normalization were important to BigGAN, so we need not use progressive training, and BigGAN's G_shared and skip_z mechanisms were similar to StyleGAN's style mechanism to some extent. So I made a lot of modifications to BigGAN and tried to move almost all the important mechanisms of StyleGAN into it. After many experiments, I found: the style mechanism could improve the score significantly and converge more quickly; the Blur was also important; Noise was not so important, so to save time I did not add it to my networks. Since I had more efficient and powerful tools in BigGAN, I did not use skip_z. I found the mix-style mechanism of StyleGAN was not important in this competition, so I did not use it in my final submission. After these efforts and some hyper-parameter tuning, the score reached about 17. </p>
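<p>To illustrate the Blur mentioned above, here is a generic PyTorch sketch of a StyleGAN-style blur layer (a fixed, normalized [1, 2, 1] binomial filter applied depthwise); it is an illustration, not necessarily the exact implementation used in the kernels.</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class Blur(nn.Module):
    # StyleGAN-style blur: fixed, normalized [1, 2, 1] binomial filter,
    # applied depthwise (one filter per channel).
    def __init__(self, channels):
        super().__init__()
        k = torch.tensor([1.0, 2.0, 1.0])
        k2 = k[:, None] * k[None, :]
        k2 = (k2 / k2.sum()).view(1, 1, 3, 3)
        self.register_buffer('kernel', k2.repeat(channels, 1, 1, 1))
        self.channels = channels

    def forward(self, x):
        return F.conv2d(x, self.kernel, padding=1, groups=self.channels)
</code></pre>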
<p>I thought I might use a larger image size to train the network, so I designed a 96x96 model instead of 64x64, and I found it could improve the score.</p>
<p>I tried to use attention, including BigGAN's attention and CBAM, in the network; it increased the training time and the improvement to the score was not so big, so I did not use them.</p>
<p>To my surprise, although the BigGAN paper stated that the larger the batch size, the better the result, I found 32 was the best batch size. So the two most important findings of BigGAN (more channels and a bigger batch size) were not suitable for this dataset, so my solution should perhaps be called <strong>Style-Small-GAN</strong> ?? :smile::smile:</p>
<p>My best parameters can be described as the following command:</p>
<pre>python train.py --shuffle --batch_size 32 --num_G_accumulations 1 --num_D_accumulations 1 --num_epochs 200 --num_D_steps 1 --G_lr 1e-4 --D_lr 6e-4 --dataset DogOrigin96 --bottom_width 6 --G_ortho 0.0 --G_attn 0 --D_attn 0 --G_init ortho --D_init ortho --ema --use_ema --ema_start 2000 --test_every 25 --save_every 10 --num_best_copies 5 --num_save_copies 2 --G_ch 24 --D_ch 24 --seed 0 --augment 1 --add_blur --add_style --crop_mode 8 --on_kaggle --experiment_name i96_ch24_hinge_ema_dstep1_bs32_noatt_glr0001_glr0006_aug_init_ortho_blur_style_origin_crop_mode8
</pre>
<p>The score improved to <strong>12.4</strong> on public LB after these steps</p>
<p>In the last several days of this competition, I turned to the model's generalization and the data itself. I found that if I cropped square boxes containing only the dogs, it performed better than adding resized original images. But, as Kaggle will use other datasets and my models were trained on the all-dogs dataset, they might over-fit to the cropped all-dogs images to some extent; when my models are applied to other images, the dogs might not be cropped, so I added resized original images to the dataset with some probability. The public LB score of this augmentation is 13.4, but I guess this model may perform better than the 12.4 one.</p>
<p>I also tried all kinds of mechanisms, such as mode seeking, all kinds of loss functions such as GDPP, normalizations, regularizations, initializations, and most of them did not work.</p>
<p>Most of the mechanisms I talked about above are kept in the kernels and can be tested by setting the parameters. If you want to have a try, 100 epochs in 4.5h can reach FID 18.x, and 125 epochs can reach FID 15.x.</p>
<p><strong>What can we do to improve the model if there are no limits on the resources and time?</strong>
1. I designed a mechanism to add the dog count in an image to the model, that is to say, a dog-count embedding. It works, but it took more time.
2. We can also cluster the images into 30-40 clusters and use these cluster ids as another embedding, e.g. the dog on the grass, on the floor..... (I did not test it)
3. We can use larger image sizes.</p>
<p>So an FID of less than 10 is not so difficult to achieve if we have two GPUs or double the time.
1 and 2 can be described as a Multi-Class-Conditional Style GAN, with the mixture of Style and BigGAN; after some well-designed experiments we may make some progress towards the state of the art in GANs?
You can check some of the images, one breed of dog per line:
<a href="https://raw.githubusercontent.com/bestfitting/kaggle/master/gandogs/samples/samples_mode8.jpg">https://raw.githubusercontent.com/bestfitting/kaggle/master/gandogs/samples/samples_mode8.jpg</a></p>
<p>The link to my kernels:
<a href="https://www.kaggle.com/bestfitting/i96-1e-4-6e-4-bs32-195">https://www.kaggle.com/bestfitting/i96-1e-4-6e-4-bs32-195</a>
<a href="https://www.kaggle.com/bestfitting/i96-crop-mode3-dstep1-1e-4-6e-4-bs32">https://www.kaggle.com/bestfitting/i96-crop-mode3-dstep1-1e-4-6e-4-bs32</a></p>
<p><b>Updated</b>
The source code in the kernels above is not easy to read; you can check the normal Python project structure version here <a href="https://github.com/bestfitting/kaggle/tree/master/gandogs">https://github.com/bestfitting/kaggle/tree/master/gandogs</a></p>
<hr>
<p><em>Please note that the progression and the scores of my models are not a formal ablation study from a paper, and I don't plan to write a paper either; I modified the parameters and other settings during this progression, so the numbers are the result of a series of complex experiment settings.</em></p>
GeoLifeCLEF 2022 - LifeCLEF 2022 x FGVC9 | 2nd place solution description | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: GeoLifeCLEF 2022 - LifeCLEF 2022 x FGVC9
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Well, I guess it's my time to explain the solution that clocked in second on the private leaderboard.<br>
Similar to the post by the winners (congratulations!) I will not go into every single detail; this will be made available through the technical report.</p>
<p><strong>General overview</strong><br>
My solution is based around deep convolutional neural networks (CNNs) that process the satellite remote sensing (RS) products only. In detail, the base model consists of two CNN feature extractors (standard image classification architectures that had their final, fully-connected classification layer dropped). They run in parallel and ingest different parts of the RS imagery: the first receives the RGB portion of the dataset, the second a stack of altitude, near-infrared (NIR) and NDVI (normalised difference vegetation index: (NIR-red)/(NIR+red)). I previously used the land cover map as input instead of the NDVI, but providing continuous models like CNNs with a discrete map of ordinal class indices is non-straightforward and not imminently sensible, hence the NDVI.</p>
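<p>For clarity, the second branch's input can be assembled roughly like this (a minimal sketch; array shapes, scaling and the epsilon are assumptions):</p>
<pre><code>import numpy as np

def second_branch_input(altitude, nir, red):
    # altitude, nir, red: float arrays of the same spatial size.
    ndvi = (nir - red) / (nir + red + 1e-6)   # normalised difference vegetation index
    return np.stack([altitude, nir, ndvi], axis=-1)  # 3-channel input for the CNN
</code></pre>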
<p>The two feature extractors process their assigned three-channel imagery in parallel, but do not share parameters. They each output a latent feature vector of the same size, which gets concatenated, subjected to heavy dropout (probability of 0.45 worked best), and converted into per-class activations by a single fully-connected layer that maps to the 17,034 species. A standard softmax-cross-entropy loss is used to train the model (more on that below).</p>
<p><strong>What helped improve accuracy</strong><br>
I ran extensive experiments and tried many different ideas; many without success (as is usually the case). However, the following four provided sufficient individual boosts in accuracy:</p>
<ol>
<li><em>Correct model architectures:</em> I commenced with a ResNet-50 for each of the two feature extractor branches. This worked reasonably well, but I got an improvement of almost 2% simply by switching to Inception-v4. I also tried DenseNet-201, which performed similarly to ResNet-50. More complex and recent architectures were on the list as well, including ConvNext and a Vision Transformer (ViT B/16). Those two took forever to train, despite decently powerful hardware (see below), and just resulted in massive overfitting (35% top-30 on train, 5% on val).</li>
<li><em>Pretraining:</em> deep learning models oftentimes require some form of pre-training to be able to adapt well, due to the enormous search space with millions of free parameters. Different means of pre-training were on my list, but what eventually worked best was to simply use the weights from ImageNet pre-training. See below for other ideas.</li>
<li><em>Spatial block-label swap:</em> while the two points above are important to make the model work in the first place, this here is the most relevant for the performance of my model by a long shot. This is a training concept that boils down to the presence-only label problem we are facing: a non-observation of a species at a particular location does not imply that the species is not there. This circumstance, together with the fact that each data point is assigned just one single species, technically violates the assumption made with by-the-book heuristics like cross-entropy loss: this one assumes that each data point has exactly one correct label and that all other classes are to be zero (it minimises the entropy in the predicted probability space). While the GeoLifeCLEF dataset is set up this way, this causes the prediction problem to be ill-posed. It is thus prudent to find a training strategy that takes this into account. A popular idea is to use a temperature softmax, but this only softens the blows for the model to a certain extent. My solution, the spatial block-label swap, attempts to relax the strict one-class requirement as follows: it first creates a spatial grid of square cells (0.01x0.01° lat/lon) and assigns each training point to its encompassing cell. During training, it then performs a look-up for each data point and randomly replaces its label with one of the neighbours in the same grid cell with a probability of 10%. This is a simple but sort of intuitive means of regularisation, and it helped the model gain another 2% of accuracy. I tried different replacement chance probabilities, as well as more sophisticated strategies (e.g., swapping according to encounter probabilities of a species within the grid cell), but the most simple method worked best.</li>
<li><em>Ensembling:</em> as with many contests (I believe), an ensemble of different models worked best. For the submission I ensembled ten different models trained; some ResNet-50, some Inception-v4, one DenseNet-201. Each was trained with different slight alterations (e.g., different learning rates, different random seeds), but the general concept always stayed the same. I ensembled the output probabilities, but in a pseudo-confidence-based manner: I performed test-time augmentation (also important) and recorded the variance of confidence across the augmentations for each test sample and each model. I then normalised and inverted the variances per model, so that they were bound to [0,1], and applied a softmax across all ten model runs. This provided me with a crude notion of model confidence. Multiplying the softmax scores with the mean model confidences across augmentations and summing them together gave the final prediction score, from which I drew the top-30 species. Ensembling gave yet another ~2% boost over the single models.</li>
</ol>
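<p>For concreteness, here is a minimal sketch of the spatial block-label swap described in point 3 above (illustrative only; details such as whether a sample may draw its own label again are assumptions):</p>
<pre><code>import random
from collections import defaultdict

def cell_id(lat, lon, cell=0.01):
    # 0.01 x 0.01 degree grid cell containing the observation.
    return (int(lat // cell), int(lon // cell))

def build_cell_index(observations):
    # observations: iterable of (lat, lon, species_id) tuples.
    index = defaultdict(list)
    for lat, lon, species in observations:
        index[cell_id(lat, lon)].append(species)
    return index

def maybe_swap_label(lat, lon, species, cell_index, p=0.10):
    # With probability p, replace the label with that of a random
    # co-located observation from the same grid cell.
    if random.random() < p:
        return random.choice(cell_index[cell_id(lat, lon)])
    return species
</code></pre>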
<p><strong>What I tried without success</strong><br>
A setup and challenge like GeoLifeCLEF absolutely invites to try out many different ideas. Providing an exhaustive list of tricks I attempted is beyond the scope of this thread, but here's some of the more relevant ones that simply did not want to work:</p>
<ul>
<li><em>Other covariates:</em> I tried plugging in the environmental rasters (one value extracted per data point), as well as the GPS coordinates, into the model. None of this worked well. I attribute reasons to the difficulty of normalising features against the RS-emerging ones and the difficulty in extracting features in the first place (going from environmental covariates to latent features requires usage of an MLP, which is always a bit messy). The winners have done the better move here by using a non-DL model for the covariates, such as a Random Forest. It shows that DL isn't always the answer (which sounds ironic, given that this is only what I used). Otherwise, encoding geospatial coordinates is not straightforward either. Since we only have the contiguous U.S. and France as study areas, we don't have the periodicity problem, but it still is nontrivial to use those. An MLP trained on the coordinates only, sometimes raw, sometimes sine-cosine-encoded, and once with a more advanced encoding (Theory-guided spatial encoding), just led to severe underfitting. I believe more research on this multimodal covariate integration would be an interesting avenue to explore further.</li>
<li><em>Auxiliary prediction tasks:</em> at some point I tried predicting not just the species class, but the other layers of the taxonomy tree (genus, family, kingdom) with individual fully-connected layers on top. That worked ok, but did not improve species classification.</li>
<li><em>Predicting histogram densities:</em> with the spatial block-label swap strategy explained above, I tried predicting the occurrence histogram per grid cell per species. This provided too faint a learning signal, though.</li>
<li><em>Advanced pre-training:</em> besides standard ImageNet weights, I tried self-supervised pre-training (in particular MoCo-v2, which is what the winner of last year's challenge used), and an own form of Model-Agnostic Meta-Learning (MAML); resp. the more lightweight alternative Almost No Inner Loop (ANIL). I also tried training models from scratch. In the end, ImageNet is what performed best.</li>
<li><em>Addressing the long-tail:</em> the species classes are severely long-tailed (i.e., thousands of species have less than ten images and a few make up the vast majority of the dataset). That strongly confuses machine learning models by default, with the result that the rare species never get predicted. I tried many ideas to cope with this, from loss weights over special losses (e.g., Balanced Softmax) to ANIL pre-training (see above). In the end, doing nothing about it worked best—for a very simply reason: the test set could be assumed to be as unbalanced in species classes as the training and validation sets. Hence, giving more weight to the rare species is actually the opposite of what one wants to maximise performance. I could probably even have dropped many of the rare species with possibly performance gains, as the model had a less complicated task to solve (I didn't do that, though).</li>
</ul>
<p><strong>Setup</strong><br>
I distributed training to three machines: two workstations (16-core CPU, 128 GB RAM, NVIDIA GeForce RTX 3090 each; Ubuntu 20.04 LTS) and an HPC (NVIDIA Tesla V100; RHEL 7). I used Python 3.8.10 and implemented my solution in PyTorch 1.9.0. Models came from either Torchvision (ResNet-50) or the PyTorch Image Models (TIMM) library (Inception-v4, DenseNet-201, Inception-v4, ViT B/16).</p>
<p><strong>Lessons learnt</strong><br>
This would be a big one to cover. It was difficult to get to the grounds of the performance of models, due to the sheer number of species classes. However, gradually getting a feel for how the models perform over different experiments was possible to an extent and certainly helped in settling in on standard parameter sets and avoiding pitfalls. There is a slew of ideas to be tested still, such as advanced data augmentation (I only used random flips, addition of Gaussian noise and normalisation), grid-searching hyperparameters, and other architectures I didn't try (EfficientNet, for example). Otherwise, adequately using all available covariates on the one hand, and further exploring tweaks about the label situation on the other, are probably worth studying.</p>
<p>Apologies for the long post; I hope it was at least somewhat insightful. Full details on the solution will be given in the technical report.</p>
<p>Many thanks for everyone involved: the dataset creators, contest organisers, data collectors, and of course the competitors! It was quite exciting to see developments on the leaderboard, especially towards the end. Huge congratulations to the winners; your solution reads fantastically and that win is more than well-deserved!</p>
<p>Thank you.</p> |
GeoLifeCLEF 2022 - LifeCLEF 2022 x FGVC9 | 🥇 1st place solution description | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: GeoLifeCLEF 2022 - LifeCLEF 2022 x FGVC9
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I was thinking how to explain it without spending too much hours but as clear as possible, probably later I will make a graphical representation of the entire data pipeline, models and inference and also maybe a notebook with some preprocessing functions and our best part of the entire experimental pytorch lightning pipeline, but for now I will share what I was writing on the Competitiors Feedback form about our best solution and what we tried that didn't work well. Sorry if I'm not accurate or I don't explain it with more details, it is probably a little bit too dense, specially for those who haven't been fully involved on the competition. What's more, for those of you who can speak or understand Spanish, you can take a look at my Team partner <a href="https://www.kaggle.com/juansensio" target="_blank">@juansensio</a> Youtube channel were he shared a couple of Twitch lives were he was starting to solve this challenge and he defined the pytorch lightning workflow pipeline, updating metrics to Weights and Biases: <a href="https://www.youtube.com/c/sensio-ia" target="_blank">https://www.youtube.com/c/sensio-ia</a></p>
<p><strong>Best solution description:</strong></p>
<p>Ensemble of 3 models: <br>
1- The first one was a bi-modal network with Nir+G+B on a pretrained resnet34, stacking its final layer with a 3-layer FCN whose inputs were the environmental vectors + lat + lon + country + alt mean + max-min alt + "dothot" encoding (this is how I called the somehow softmax-onehot encoding) of landcovers; these two backbones were connected to the final 17k class layer. </p>
<p>2- The second model was similar to the previous one, but the CNN was a mobilenetv3 100 large pretrained model with the input of R+G+B+Nir, the FC network was the same as the previous one, and between the stacked final layers of these two backbones we added an extra Linear Layer of 2048 with dropout and ReLU and then the final 17k layer for classification. </p>
<p>3- The third model was a Random Forest with 32 estimators and a depth of 12, using as inputs the same as the previous FC networks: the environmental vectors + lat + lon + country + alt mean + max-min alt + dothot encoding (softmax-onehot encoding) of landcovers, plus the 25, 50 and 75% percentiles of each of the R/G/B/Nir layers, so 81 input features in total, also adding the validation data to the training. </p>
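<p>Below are two minimal sketches of these ideas (illustrative only, not our training code). The first sketches the bi-modal network of model 1 in PyTorch; the MLP sizes (256/512/1024 with dropout 0.2) are the ones mentioned later in this thread, while the class count and other details are assumptions.</p>
<pre><code>import torch
import torch.nn as nn
import torchvision

class BiModalNet(nn.Module):
    # CNN branch for the (Nir, G, B) patch + MLP branch for the tabular
    # features, concatenated before the ~17k-class output layer.
    def __init__(self, n_tabular, n_classes):
        super().__init__()
        cnn = torchvision.models.resnet34(pretrained=True)
        n_cnn = cnn.fc.in_features            # 512 for resnet34
        cnn.fc = nn.Identity()                # drop the ImageNet head
        self.cnn = cnn
        self.mlp = nn.Sequential(
            nn.Linear(n_tabular, 256), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(256, 512), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(512, 1024), nn.ReLU(), nn.Dropout(0.2),
        )
        self.head = nn.Linear(n_cnn + 1024, n_classes)

    def forward(self, image, tabular):
        feats = torch.cat([self.cnn(image), self.mlp(tabular)], dim=1)
        return self.head(feats)
</code></pre>
<p>And a sketch of the Random Forest of model 3 with scikit-learn (the "entropy" criterion is mentioned later in this thread; <code>n_jobs</code> is just a convenience assumption):</p>
<pre><code>from sklearn.ensemble import RandomForestClassifier

# X_train: (n_samples, 81) tabular features described above.
rf = RandomForestClassifier(n_estimators=32, max_depth=12,
                            criterion="entropy", n_jobs=-1)
# rf.fit(X_train, y_train)
# probs = rf.predict_proba(X_test)   # merged with the CNN probabilities
</code></pre>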
<p>In addition, the first two models had data augmentation on the CNN inputs: random vertical and horizontal flips, rotations, and 5-10% brightness and contrast changes. We also implemented Test Time Augmentation with 5 random image transformations for each sample and then merged these by taking the mean probabilities of every prediction. We applied this same mean-probability strategy to merge the 3-model ensemble. Finally, we tried to retrain the models a little further by adding the validation data to the training data, and it improved a little, but we could probably do it better. </p>
<p>On my side, the setup is an RTX 3090 with 24 GB of VRAM, a 12th-gen i7 and 64 GB of RAM, and Juan also has a heavy-duty rig with two RTX 3090s. That amount of VRAM has been very helpful for the CNNs, but I also needed a lot of CPU RAM for the Random Forests with 17k classes.</p>
<p><strong>Things that we also tried but performed worse:</strong></p>
<p>We tried to aggregate nearby labels to get multi-label observations; we tried it in different ways and with different loss functions, but none of them performed better than single labels, although something tells me that there has to be a way to make it work. Regarding label aggregation, I noticed that at some spots there were up to several hundred different labels together, which means that there is a theoretical minimum top-30 error, since at some point, even with an ideally perfect model, you will have to bet on 30 labels among the 50, 100 or more that are all really present at that location. </p>
<p>We also tried other backbones in the CNN models, we tried to train them without transfer learning, and we tried to use three different backbones for 1. RGBNir + 2. Alt tiffs + 3. Landcover tiffs and then add these to the FC tabular data backbone, to name the most relevant experiments. None of these gave us the best results, but there is probably room for some of them to do better. I also tried gradient boosted trees, but with 17k classes this would probably need around a terabyte of RAM, I guess.</p>
<p><strong>Conclusion:</strong></p>
<p>In the end I value our experience participating in this competition very positively. I personally learnt and practised some new techniques and types of data, but I would like to highlight two: the use of multi-modal networks, where you can mix structured and unstructured data with their own backbones and finally merge them in the same final layers, and tackling a problem with presence-only data, something that I had never encountered or thought of before, but which could really appear in other areas like medicine, finance and more, where you can have latent or hidden variables.</p>
<p>Thank you very much to the organizers and the Kaggle team. Also a big applause to all the people who collect this kind of ecological bio data out there; it's incredible to have almost two million data points together with this level of diversity. And finally, good job to all the other teams and members who have participated and have motivated us to keep pushing until the last day. I hope that our notes give you all more knowledge, and I'm curious to see and learn from what else has been tried and done to achieve results almost as good as ours!</p>
<p>Best Regards!</p>
<p>Enric Domingo</p>
<p>Hi Louis, find here a Linkedin link with a visual schema of that implementation (apparently it is not possible to attach images right now here): <a href="https://www.linkedin.com/posts/e-domingo_geolifeclef-kaggle-ml-activity-6935271949359067136-1HnT?utm_source=linkedin_share&utm_medium=member_desktop_web" target="_blank">https://www.linkedin.com/posts/e-domingo_geolifeclef-kaggle-ml-activity-6935271949359067136-1HnT?utm_source=linkedin_share&utm_medium=member_desktop_web</a></p>
<p>I should also mention that I may have unintentionally left out some details, such as the sklearn random forest parameter "criterion", which I changed from the default "gini" to "entropy"; it took twice as long to train but gave better results. Other possibly missing details include the number of units in the MLPs of the bi-modal networks, which were 256, 512 and 1024 if I remember correctly, with dropout of 0.2 in every layer, as well as the data preprocessing and the data augmentation strategy. Apart from that I could still be missing something else; you would also need at least 24GB of VRAM in your GPU to train with the same batch sizes as I did, and 64GB of RAM to fit and train the random forest model. Finally, the training seed could also lead to different results, since I did not use a deterministic process or a fixed random seed. Sorry for not being more precise, but there were many experiments and small changes, and in the end it was difficult to keep and explain only what worked and discard the many other things we tried that performed worse.</p>
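<p>For concreteness, a hedged sketch of that random-forest setup; only <code>criterion="entropy"</code> comes from the write-up, while the other parameters and the variable names (<code>X_train</code>, <code>y_train</code>, <code>X_valid</code>) are placeholders:</p>
<pre><code>from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(
    n_estimators=500,        # assumption, not the value actually used
    criterion="entropy",     # switched from the default "gini"; roughly 2x slower but scored better
    n_jobs=-1,
)
rf.fit(X_train, y_train)               # X_train: tabular features, y_train: ~17k species ids
probs = rf.predict_proba(X_valid)      # with 17k classes this needs a lot of RAM (64GB here)
</code></pre>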
<p>Thank you so much Marília! Now it's that time of year again for the CLEF competitions, including GeoLifeCLEF 2023. (I'm currently focused on other competitions, though, so I'm not participating in GeoLifeCLEF this year :P)</p>
Global Wheat Detection | 185th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>If you are asking how I did the ensemble:
Step 1: Generate prediction1 from 5 fold d2 (used wbf)
Step 2: Generate prediction2 from 5 fold d4 (used wbf)
Step 3: Finally generate the final prediction by merging prediction1 and prediction2 (used wbf)
Using only one fold also worked, but achieved a slightly lower score than the above method (0.7591 compared to 0.7595 public LB)</p>
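<p>A minimal sketch of how such fold-level and cross-model merging can be done with the <code>ensemble_boxes</code> WBF implementation; the dummy predictions and thresholds below are placeholders, not the exact values used:</p>
<pre><code>import numpy as np
from ensemble_boxes import weighted_boxes_fusion

# Dummy single-image predictions from two models/folds; boxes are [x1, y1, x2, y2] normalised to [0, 1].
pred1_boxes, pred1_scores = np.array([[0.10, 0.10, 0.30, 0.30]]), np.array([0.90])
pred2_boxes, pred2_scores = np.array([[0.11, 0.09, 0.31, 0.29]]), np.array([0.80])

boxes, scores, labels = weighted_boxes_fusion(
    [pred1_boxes, pred2_boxes],
    [pred1_scores, pred2_scores],
    [np.zeros(1), np.zeros(1)],                      # single "wheat" class
    weights=None, iou_thr=0.55, skip_box_thr=0.4,    # thresholds are assumptions, tune on CV
)
</code></pre>
<p>The same call is used at each stage: once per model to merge its 5 folds, and once more to merge the two models' fused outputs.</p>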
<p>Thanks <a href="/piantic">@piantic</a>. Yes, it is round=1. A minor update has been made to the post. I actually up-sampled the pseudo test data twice and trained for 1 epoch. This took more than 7 hours, so running more than 1 epoch would have resulted in a notebook timeout error (>9h).</p>
<p>Thanks to the organizers and kaggle team for hosting this competition! Also great thanks to <a href="https://www.kaggle.com/rwightman" target="_blank">@rwightman</a> for his efficientdet-pytorch repo, <a href="https://www.kaggle.com/zfturbo" target="_blank">@zfturbo</a> for his weighted-box-fusion repo, to <a href="https://www.kaggle.com/shonenkov" target="_blank">@shonenkov</a> for his amazing public notebook based on which I started this competition and to everyone who has been active in the discussion forum during the competition. Finally congratulations to all the fair prize winners and to the fair medal winners.</p>
<p><strong>Summary:</strong></p>
<ul>
<li>5 fold d2 trained on 1024x1024 + 5 fold d4 trained on 512x512</li>
<li>Heavy augmentations (added more augmentations on top of Alex's public kernel augmentations, for example noise augmentation and scale-and-crop; this gave me a 0.01x-0.2x boost)</li>
<li>Used weighted-boxes-fusion for ensemble</li>
<li>5xTTA</li>
<li>Pseudo labeling(1 epoch, 1 round, training data + 2x pseudo test data)</li>
</ul>
<p>All models ensemble with 5xTTA and wbf achieve 0.7563 public LB/ 0.6956 private LB. Adding pseudo labeling improves the score to 0.7595 public Lb/ 0.7050 private LB. </p>
<p><strong>Did not work for me:</strong></p>
<ul>
<li>Cleaning the bounding boxes</li>
<li>Gradient accumulation</li>
<li>Cutmix and Mixup scores were not as good as the models without them</li>
<li>WBF threshold Tweaking </li>
<li>Tweaking efficientdet loss function hyper-parameters </li>
</ul> |
Global Wheat Detection | 1st place solution [MIT-Compliant] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Are you crazy? It's not done yet :(</p>
<p><a href="https://arxiv.org/pdf/1911.09070.pdf">https://arxiv.org/pdf/1911.09070.pdf</a></p>
<p>I didn't experiment much on the model without external dataset
(Ed7 768 Fold0 Public LB 0.7403 without external)
NVIDIA Tesla V100 32GB + 2080 Ti 11GB</p>
<p>Thank you!</p>
<p>Thank you!</p>
<p>Thank you!</p>
<p>Thank you.</p>
<p>Thank you!</p>
<p>Thank you!
Because the SPIKE dataset is very noisy, I warm up the model for 20 epochs on trainset+wheat2017+spike without mixup, then train the model for 80 epochs on trainset+wheat2017 with mixup.</p>
<p>Thank you, I will publish later.</p>
<p>yes, all models are trained with mixup</p>
<p>gradient accumulation didn't work for me
effdet ed7 768: batch size 8
faster RCNN fpn resnet152 1024: batch size 20</p>
<p>Thank you!</p>
<p>Thank you!</p>
<p>Thank you!</p>
<p>There are more images without boxes in the public test set than in the validation set; the gap between valid and public is about 0.04 for me</p>
<p>Thanks to the organizers for this interesting competition!<br>
Congratulations to all! This is my first solo gold medal. As there are lots of great public notebooks posted, I will keep my solution simple and short.<br>
Special thanks to <a href="https://www.kaggle.com/rwightman" target="_blank">@rwightman</a> for his amazing repo (<a href="https://github.com/rwightman/efficientdet-pytorch" target="_blank">https://github.com/rwightman/efficientdet-pytorch</a>)</p>
<p><strong>Summary</strong></p>
<ul>
<li>Custom mosaic data augmentation</li>
<li>MixUp</li>
<li>Heavy augmentation</li>
<li>Data cleaning</li>
<li>EfficientDet</li>
<li>Faster RCNN FPN</li>
<li>Ensemble multi-scale model: Weighted-Boxes-Fusion, special thank <a href="https://www.kaggle.com/zfturbo" target="_blank">@zfturbo</a> </li>
<li>Test time augmentation(HorizontalFlip, VerticalFlip, Rotate90)</li>
<li>Pseudo labeling</li>
</ul>
<p><strong>Data Processing</strong></p>
<ul>
<li><p>Custom mosaic augmentation:<br>
Mosaic is a data augmentation method that combines 4 training images into one for training (instead of 2 in CutMix). Instead of randomly cropping a piece of the image, I created a custom mosaic augmentation as below to keep the border information:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1393170%2F6c0205c6ac7299bc0ea644811521e4c3%2Fmosaic.png?generation=1596590064167930&alt=media" alt=""></p></li>
<li><p>MixUp: <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1393170%2Fda4e1aaafcc421760391e099b4ffca11%2Fmixup.png?generation=1596590747904905&alt=media" alt=""></p></li>
<li><p>Heavy augmentation:<br>
RandomCrop, HorizontalFlip, VerticalFlip, ToGray, IAAAdditiveGaussianNoise, GaussNoise, MotionBlur, MedianBlur, Blur, CLAHE, Sharpen, Emboss, RandomBrightnessContrast, HueSaturationValue</p></li>
</ul>
<p>examples after apply mosaic, mixup, augmentation<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1393170%2Fe226aefe53eab2e2a87a304cd2a84c5e%2Fmixup.png?generation=1596590604527225&alt=media" alt=""></p>
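<p>For illustration, a simplified mosaic/mixup sketch: it tiles four resized images into a 2x2 grid and blends two images 1:1. It is not the exact border-preserving implementation used here, just the general idea:</p>
<pre><code>import cv2
import numpy as np

def simple_mosaic(images, boxes_per_image, size=1024):
    """Tile 4 images (each resized to size/2) into one mosaic and offset their boxes.
    Boxes are [x1, y1, x2, y2] in pixel coordinates."""
    half = size // 2
    canvas = np.zeros((size, size, 3), dtype=np.uint8)
    offsets = [(0, 0), (half, 0), (0, half), (half, half)]          # (x_off, y_off)
    out_boxes = []
    for (img, boxes), (xo, yo) in zip(zip(images, boxes_per_image), offsets):
        h, w = img.shape[:2]
        canvas[yo:yo + half, xo:xo + half] = cv2.resize(img, (half, half))
        b = boxes.astype(np.float32).copy()
        b[:, [0, 2]] = b[:, [0, 2]] * half / w + xo
        b[:, [1, 3]] = b[:, [1, 3]] * half / h + yo
        out_boxes.append(b)
    return canvas, np.concatenate(out_boxes)

def simple_mixup(img1, boxes1, img2, boxes2):
    """1:1 mixup for detection: blend the pixels, keep the union of the boxes."""
    mixed = (img1.astype(np.float32) * 0.5 + img2.astype(np.float32) * 0.5).astype(np.uint8)
    return mixed, np.concatenate([boxes1, boxes2])
</code></pre>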
<ul>
<li>External data<br>
I use 2 external data:<br>
*wheat spikes (<a href="https://www.kaggle.com/c/global-wheat-detection/discussion/164346)" target="_blank">https://www.kaggle.com/c/global-wheat-detection/discussion/164346)</a>, for license please refer (<a href="https://www.kaggle.com/c/global-wheat-detection/discussion/164346#928613" target="_blank">https://www.kaggle.com/c/global-wheat-detection/discussion/164346#928613</a>)<br>
*wheat 2017 (<a href="https://plantimages.nottingham.ac.uk/" target="_blank">https://plantimages.nottingham.ac.uk/</a>) at post (<a href="https://www.kaggle.com/c/global-wheat-detection/discussion/148561#863159)" target="_blank">https://www.kaggle.com/c/global-wheat-detection/discussion/148561#863159)</a>. <br>
I created annotations (bounding boxes) for all images and cropped them to 1024x1024. I contacted the author to confirm usage rights, so there is no license issue.</li>
<li>Data Cleaning<br>
*Deleted tiny bounding boxes (width or height < 10px)<br>
*Fixed too big bounding boxes</li>
</ul>
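<p>A small pandas sketch of the cleaning rules above; the unpacked column layout (<code>x, y, w, h</code> in pixels), the file name and the "too big" area threshold are assumptions:</p>
<pre><code>import pandas as pd

df = pd.read_csv("train_unpacked.csv")          # hypothetical csv with one row per box

# Delete tiny bounding boxes (width or height < 10 px).
df = df[(df.w >= 10) & (df.h >= 10)]

# Flag suspiciously large boxes for manual fixing instead of dropping them.
too_big = df[(df.w * df.h) > 200_000]           # area threshold is an assumption
print(f"{len(too_big)} boxes to review by hand")
</code></pre>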
<p><strong>Model</strong></p>
<ul>
<li>5 folds, stratified k-fold, split by source (usask_1, arvalis_1, arvalis_2…)</li>
<li>Optimizer: Adam with initial LR 5e-4 for EfficientDet and SGD with initial LR 5e-3 for Faster RCNN FPN</li>
<li>LR scheduler: cosine-annealing</li>
<li>Mixed precision training with nvidia-apex</li>
</ul>
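<p>A minimal sketch of the optimizer and LR-scheduler setup listed above; the momentum value, epoch count and the stand-in models are assumptions:</p>
<pre><code>import torch
import torch.nn as nn

effdet_model = nn.Linear(1, 1)   # stand-ins for the real EfficientDet / Faster RCNN models
frcnn_model = nn.Linear(1, 1)

opt_effdet = torch.optim.Adam(effdet_model.parameters(), lr=5e-4)
opt_frcnn = torch.optim.SGD(frcnn_model.parameters(), lr=5e-3, momentum=0.9)

num_epochs = 100                 # assumption; the write-up does not state the epoch count here
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt_effdet, T_max=num_epochs)

for epoch in range(num_epochs):
    # ... run one training epoch (mixed precision via nvidia-apex in the original setup) ...
    sched.step()
</code></pre>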
<p><strong>Performance</strong><br>
Valid AP/Public LB AP</p>
<ul>
<li>EfficientDet-d7 image-size 768: Fold0 0.709/0.746, Fold1 0.716/0.750, Fold 2 0.707/0.749, Fold3 0.716/0.748, Fold4 0.713/0.740</li>
<li>EfficientDet-d7 image-size 1024: Fold1,3</li>
<li>EfficientDet-d5 image-size 512: Fold4</li>
<li>Faster RCNN FPN-resnet152 image-size 1024: Fold1</li>
<li>Ensemble 9 models above using wbf can achieve <strong>0.7629 Public LB/0.7096 Private LB</strong> (old testset)</li>
</ul>
<p><strong>Pseudo labeling</strong></p>
<ul>
<li>Base: EfficientDet-d6 image-size 640 Fold1 0.716 Valid AP</li>
<li>Round1: Train EfficientDet-d6 10 epochs with trainset + hidden testset (output of ensembling), load weight from base checkpoint<br>
Result: [old testset] 0.7719 Public LB/0.7175 Private LB and [new testset] 0.7633 Public LB/0.6787 Private LB</li>
<li>Round2: Continue train EfficientDet-d6 6 epochs with trainset + hidden testset (output of pseudo labeling round1), load weight from pseudo labeling round1 checkpoint<br>
Result: [old testset]0.7754 Public LB/0.7205 Private LB and [new testset]0.7656 Public LB/0.6897 Private LB</li>
</ul>
<p>Github source code at <a href="https://github.com/dungnb1333/global-wheat-dection-2020" target="_blank">https://github.com/dungnb1333/global-wheat-dection-2020</a></p>
<p>Thank you</p>
<p>(horizontal flip, original image)x(vertical flip, original image)x(rot90, original image) = 8 TTA</p> |
Global Wheat Detection | 2nd Place Solution with Code [MIT-Compliant] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>[update] The code is now available at <a href="https://github.com/liaopeiyuan/TransferDet" target="_blank">https://github.com/liaopeiyuan/TransferDet</a> .</p>
<p>Now that the results are out, I've decided to share a bit more than <a href="https://www.kaggle.com/c/global-wheat-detection/discussion/172433" target="_blank">my previous post</a>.</p>
<p>First of all, huge thanks to <a href="https://www.kaggle.com/rwightman" target="_blank">@rwightman</a> for his PyTorch implementation of EfficientDet and <a href="https://www.kaggle.com/shonenkov" target="_blank">@shonenkov</a> for his amazing starter kernel. I've also learned a lot from the discussion posts.</p>
<p>This is an interesting challenge with a very noisy dataset and huge domain shift between train/test, and within train/test as well. The kernel running time limit also gives possibility to do semi-supervised learning, which can in a way reduce the aforementioned problem.</p>
<p>The first choice I made was the way I validate and perform model selection. This is a tough choice because 1. object detection is inherently a hard task and 2. the training data are noisy and unstable.<br>
Eventually my decision was "relative detection loss" on a 20% stratified split with bad boxes removed (by manual inspection), as hinted <a href="https://www.kaggle.com/c/global-wheat-detection/discussion/166564#927359" target="_blank">here</a>. My reasoning is as follows:</p>
<p>We know that train and test distributions are wildly different, so mAP to me would make little sense because it's a heavy processed result (with hyperparameters in between of nms, and which mAP to look at, etc.) And detection loss we use to train our networks can capture the overall classification and regression performance. In addition, I don't use this loss to compare across models, and instead I accumulated a list of procedures that I considered as "comparable under loss," by pure empirical results and a little bit of instinct, and used them to select minor choices such as augmentations. For model selection overall, I took the public LB with a grain of salt, and in a way because I ran out of time to train diverse models, I didn't have to make those huge choices of model ensemble. I believe that I can investigate into this more once the entire set is released and cleaned.</p>
<p>Now, on to data insights. We can all agree that the data, even with blatantly bad boxes removed, are really noisy, in two forms: noisy anchor and noisy boxes. </p>
<ul>
<li>For noisy boxes, this is the direct result of different understanding of wheat heads by annotators (including an automatic one, Yolov3). So I ran a few experiments to ascertain the noise. I degraded the boxes' x,y coordinates by 10% and 5% respectively (kind of the norm on articles robust object detection), and trained with my previously best pipeline. Interestingly, for both degradation we've seen a drop in model performance, so in a way the data is more informative than a 5% shift. My models were extremely costly to run, so I didn't run any more tests or designed experiments that could reveal more about bbox noise, but I had a basic idea of how to proceed. </li>
<li>Noisy anchors, on the other hand, is a little tricky, because the problem is not just about disagreements between labellers. Remark that our training data are 1024 crops of larger pictures, and (correct me if I'm mistaken) annotation is only done on crops, with the wheat heads on edges only getting annotated if 1/3 of the head is visible. This is kind of crucial because APs in this competition has a huge impact on the resulting mAP performance. Models that are too confident over the border (identifying all wheat heads no matter if it's over 1/3 or not, sometimes even leaves mistakenly) or too pessimistic both degrades final performance. This also explains my jigsaw trick, which I will explain in the following paragraphs. Finally, the source images are really different. One could train a simple CNN classifier and classify the source with validation accuracy 99.99%. Even so, you can learn a metric learning model on say 4 sources and achieve 80%+ homogeneity when extracting embedding on the remaining 3 sources that the model has never seen.</li>
</ul>
<p>Therefore, all the models I developed are trying to address these issues. I first tried DetectorRS, but only managed to get pass 0.72 LB. So I moved on to the EfficientDet pipeline. I ran some simple tests from D4 to D7, and concluded that D6 with COCO-pretrained weight is the best. Here are my reasonings.</p>
<p>D0-D5: not large enough<br>
D7: Remark that D7 got its precision by larger input size (and hence larger anchors). I really couldn't fit 1536-resized images on my GPU and get reasonable results.<br>
D6 with noisy student/AdvProp pretrained cls backbone and random-initialized FPN/box, cls net: Object detection is quite hard. I couldn't make these models converge.</p>
<p>The next thing I did was trying to improve baseline performance of my EfficientDet-D6 model. I tried over 10 different techniques by dissecting the timm-efficientdet repo line by line and hacking into them, but turned out simplicity and elegance prevailed. The best performance is achieved by the default setting of the D6 model (with anchors unchanged; I tried KNN clustering of anchors in train set but it gave worse results), with default huber loss and focal loss. Augmentation wise, it's just an empirically deduced recipe:</p>
<pre><code> A.Compose(
[
A.RandomSizedCrop(min_max_height=(800, 1024), height=1024, width=1024, p=0.5),
A.OneOf([
A.HueSaturationValue(hue_shift_limit=0.2, sat_shift_limit= 0.2,
val_shift_limit=0.2, p=0.9),
A.RandomBrightnessContrast(brightness_limit=0.2,
contrast_limit=0.2, p=0.9),
],p=0.9),
A.ToGray(p=0.01),
A.HorizontalFlip(p=0.5),
A.VerticalFlip(p=0.5),
A.RandomRotate90(p=0.5),
A.Transpose(p=0.5),
A.JpegCompression(quality_lower=85, quality_upper=95, p=0.2),
A.OneOf([
A.Blur(blur_limit=3, p=1.0),
A.MedianBlur(blur_limit=3, p=1.0)
],p=0.1),
A.Resize(height=1024, width=1024, p=1),
A.Cutout(num_holes=8, max_h_size=64, max_w_size=64, fill_value=0, p=0.5),
ToTensorV2(p=1.0),
],
p=1.0,
bbox_params=A.BboxParams(
format='pascal_voc',
min_area=0,
min_visibility=0,
label_fields=['labels']
)
)
</code></pre>
<p>and for <code>__getitem__</code>:</p>
<pre><code>if self.test or random.random() > 0.33:
image, boxes = self.load_image_and_boxes(index)
elif random.random() > 0.5:
image, boxes = self.load_cutmix_image_and_boxes(index)
else:
image, boxes = self.load_mixup_image_and_boxes(index)
</code></pre>
<p>Remark that mixup is the 1:1 version; I didn't get the beta version to work (I tried to reweight the boxes during loss calculation; I got worse results). Then I tuned the WBF over TTA (8x, because 16x has duplicates) pipeline a bit with basic sanity checks and exclusion of boxes that have low score or are too big or small. The overall baseline gave public LB of around 0.753.</p>
<p>Then, I realized that similar to the TGS competition I participated few years back, we can get more data by doing jigsaws. You can check out one of the popular public kernels for how to restore the original full-sized images. After this, I realized two more problems: </p>
<ol>
<li><p>Boxes over the edge are fractured in the sense that if a wheat head appears in half in one image and in half in another, there will be two boxes in the final image. This will generate huge amount of noise and will certainly degrade model performance.</p></li>
<li><p>This is more subtle. Remark that in the data insight section I talked about the conundrum of boxes over the edge. Now, if we are now cropping 1024 patches from the larger image, the resulting distribution is actually different than the one represented by what we now can see as crops at special positions. I didn't really document the changes in model performance, but it is indeed significant enough for training.</p></li>
</ol>
<p>My solutions are as follows:</p>
<ol>
<li><p>I tried to correct every single label manually but got bored after 30 minutes. So I wrote an automatic procedure that does that for me:</p>
<ul>
<li>Get all boxes that have one of the coordinates divisible by 1024</li>
<li>Prune those over the edge of the big picture</li>
<li>Generate the segments over the edge and match it to box (aka we only care about the side on the edge)</li>
<li>Calculate pairwise IoU over edges</li>
<li>Pair boxes using a greedy approach</li>
<li>Fuse boxes<br>
I then used the corrected data to train my models. And, to encourage APs gain for my models, The images have a 0.5 chance of feeding into the network without cropping, e.g., resizing an 2048x2048 down to 1024x1024 instead of cropping it from the larger image. I did some ablation experiments and this does improve model performance.</li></ul></li>
<li><p>What I did here is basically treating images obtained from jigsaw as "pseudo-labels," or simply images that come from different distributions.</p>
<ul>
<li>The first model I developed is based on the paper "A Simple Semi-Supervised Learning Framework for Object Detection". Basically, the resulting loss is a weighted sum of loss calculated by original image and jigsaw image separately. </li>
<li>Then, I remembered the trick introduced in the paper "Adversarial Examples Improve Image Recognition", where images of different distributions goes through different BatchNorms. So I copied the batchnorm stats from baseline and replicate it twice, and during 2-nd stage training original data and jigsaw data passes through different norms, and the stats are calculated separately. During inference, I use the distribution of the original data. I tried extended scenario such as a 7-split norm by source, or average of different splits of norms during inference, all of which gave improvements locally, but not on public LB, so I didn't investigate further.</li></ul></li>
</ol>
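<p>A simplified sketch of the edge-box fusion in step 1 above (1-D IoU of the y-intervals along a vertical seam, then greedy pairing); it is only an illustration of the idea, not the exact procedure used, and the tolerances are assumptions:</p>
<pre><code>import numpy as np

def fuse_edge_boxes(left_boxes, right_boxes, seam_x, tol=2, iou_thr=0.3):
    """Pair boxes that touch a vertical seam at x = seam_x from both sides and fuse each pair.
    Boxes are [x1, y1, x2, y2]."""
    left = [b for b in left_boxes if abs(b[2] - seam_x) <= tol]      # boxes ending on the seam
    right = [b for b in right_boxes if abs(b[0] - seam_x) <= tol]    # boxes starting on the seam

    def edge_iou(a, b):                      # 1-D IoU of the y-intervals along the seam
        inter = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        union = (a[3] - a[1]) + (b[3] - b[1]) - inter
        return inter / union if union > 0 else 0.0

    candidates = sorted(((edge_iou(a, b), i, j)
                         for i, a in enumerate(left) for j, b in enumerate(right)),
                        reverse=True)
    used_l, used_r, fused = set(), set(), []
    for iou, i, j in candidates:             # greedy: best-matching pairs first
        if iou < iou_thr:
            break
        if i in used_l or j in used_r:
            continue
        a, b = left[i], right[j]
        fused.append([a[0], min(a[1], b[1]), b[2], max(a[3], b[3])])
        used_l.add(i); used_r.add(j)
    return fused
</code></pre>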
<p>I also designed a detection variant of FixMatch, but I couldn't fit it in my GPU with image size = 1024.</p>
<p>Finally the semi-supervised learning part was pretty mundane, as fancy techniques (STAC, SplitBatchNorm, Pi-Model, Mean Teacher, etc) did not fit under the kernel eval time frame. All I did was to freeze the backbone to allow training with 16GB vram, and different hyperparameters to generate the pseudolabels. You can see the details in my open-sourced kernels. Most choices are more empirical than insightful.</p>
<p>Sorry for this messy solution journal; working alone gives you a bad habit of trying too many things and not documenting well enough. Also, since I worked alone, I didn't have enough time to investigate in depth some of the ideas I brought up above. I will seek to improve the solution journal in the upcoming days, and maybe write a tech report if I have time.</p>
<p><a href="https://www.kaggle.com/anokas" target="_blank">@anokas</a> Mikel is way earlier than I am when he got his GM status :)</p>
<p>You are right. I must've misread the albumentation APIs. It still changes the pixel values by 1-2 on average though. I could run a few more experiments to determine if this part is crucial to my solution.</p>
<p>Update: the code is released at <a href="https://github.com/liaopeiyuan/TransferDet" target="_blank">https://github.com/liaopeiyuan/TransferDet</a> .</p> |
Global Wheat Detection | 5th Place Solution (EfficientDet) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to the host and all competitors.<br>
It was an interesting competition for me.</p>
<h1>TL;DR</h1>
<ol>
<li>EfficientDetB3 5Fold -> 10 image Test with Pseudo Labeling</li>
<li>EfficientDetB5 All Training (EMA) -> Prediction with MultiScale (1280, 768, 1024 + 4Flip) 0.766</li>
<li>EfficientDetB5 All + EfficientDetB4 All + EfficientDetB5 All(With Pseudo) Public 0.7752</li>
</ol>
<h1>Solution</h1>
<h2>1st Phase</h2>
<p>EfficientDetB3 for pseudo labeling of the 10 test images<br>
The test images come from the "test source", and I wanted to capture knowledge of that source</p>
<h2>2nd Phase</h2>
<p>I used EMA for the ensemble technique in 2nd phase.</p>
<ol>
<li>AdamW, 100 epochs, image size 640, batch size 4 (for BN parameter tuning)</li>
<li>AdamW, 40 epochs, image size 1024, batch size 1 (BN layers frozen)</li>
</ol>
<p>use Cosine Annealing</p>
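<p>EMA here means keeping an exponential moving average of the model weights during training and predicting with the averaged copy. A generic sketch (the decay value is an assumption, not necessarily what was used):</p>
<pre><code>import copy
import torch

class ModelEMA:
    def __init__(self, model, decay=0.999):
        self.ema = copy.deepcopy(model).eval()     # shadow copy used for evaluation
        self.decay = decay
        for p in self.ema.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        msd = model.state_dict()
        for k, v in self.ema.state_dict().items():
            if v.dtype.is_floating_point:          # only floating-point tensors are blended
                v.mul_(self.decay).add_(msd[k].detach(), alpha=1.0 - self.decay)
</code></pre>
<p>Usage: call <code>ema.update(model)</code> after every optimizer step, then evaluate and save <code>ema.ema</code>.</p>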
<p>Augmentation</p>
<ul>
<li>Mixup</li>
<li>Mosaic</li>
<li>Scale</li>
<li>Hue</li>
<li>Random Brightness</li>
<li>Cutout</li>
<li>GridMask</li>
</ul>
<h2>3rd Phase(Kernel)</h2>
<h3>Prediction</h3>
<p>EfficientDetB5 All + Predict MultiScale(768, 1024, 1280 * 4Flip) -> Pseudo Labeling of All test. </p>
<h3>Pseudo Labeling</h3>
<p>Training parameters as follows</p>
<ul>
<li>EfficientDet B5 image size 1024 </li>
<li>Epoch 5</li>
<li>Use EMA</li>
<li>Mixed Precision(AMP)</li>
</ul>
<h3>Ensemble</h3>
<p>EfficientDetB5 All + EfficientDetB4 All + EfficientDetB5 All(With Pseudo) using WBF</p>
<h1>Why Shakedown?</h1>
<ol>
<li>Failed threshold tuning (a lower threshold is better on private). <br>
When setting the threshold to 0.1 the score goes to 0.695, but I could not select that submission.</li>
<li>Use more heavy augmentation for robustness </li>
</ol>
<p>Thanks! Maybe the test set will become public. I am preparing a training notebook (lots of junk code...)</p>
<p>Thanks, pseudo labeling was part of my plan from early on, <br>
because of the very long kernel time and the domain gap. <br>
But 0.7700 is my best public score without pseudo labeling </p>
<p>A little: EfficientDetB5 All (with pseudo) alone scores Public 0.7747 (+- 0.0001)</p>
<p>I have a plan to publish it.</p>
<p>No. It is detection threshold.</p> |
Global Wheat Detection | Thank you to all and rank "997th"'s approach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I want to say thank you to all the people sharing awesome notebooks with their very cool insights. This was my first Kaggle competition, so it is special for me. Before joining this competition I knew object detection only in theory and I was a "Keras guy"; this competition helped me implement a full model written in PyTorch, and I am very happy about that. The starter notebooks and discussions were really beneficial.<br>
I wanted to share my approach, although it's not that good (you can see the rank and tell). I faced some problems, as I have a laptop with only 4GB of RAM and a 2GB GPU, so I had to rely solely on Kaggle kernels, which have a time limit, and that made things a little hard for me.<br>
So here is my approach:</p>
<ol>
<li>I used 512X512 sized images to train a base FasterRCNN(resnet50) model using all the data and no data augmentation (30 epochs).</li>
<li>I used Stratified KFolds to make 5 folds of the data.</li>
<li>Each fold of data had the same augmentations including horizontal vertical flipping, Random Shadow, Gaussian Blur, Cutouts.</li>
<li>I used the base model to retrain on each fold now using 1024x1024 images with augmentations.</li>
<li>Lastly, I used WBF ensemble to get the result from each of the models.<br>
I will be really happy if you comment on how I could make this model better or why it didn't work.<br>
So this was it from my side. I hope I can use this knowledge in other competitions. Stay safe and sound, and see you soon in other competitions. I am so eager to see the top solutions!!<br>
Special thanks to <a href="https://www.kaggle.com/pestipeti" target="_blank">@pestipeti</a> <a href="https://www.kaggle.com/kaushal2896" target="_blank">@kaushal2896</a> <a href="https://www.kaggle.com/matthewmasters" target="_blank">@matthewmasters</a> for the notebooks and discussions and clearing my doubts.<br>
Thankyou!!!😀 </li>
</ol> |
Global Wheat Detection | Updated lb position 228 ,unselected 15th place | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It is a well-known fact that there are many noisy bounding boxes in the train data, unlike the test data. <br>
Here are the things that worked best for me in 10 days:</p>
<p>1) Label correction of small bounding boxes by decreasing x,y by 2 percent and increasing width and height by a factor of 1.5. This enlarges point-sized boxes to a visible level without affecting the accuracy of the boxes much.</p>
<p>2) Removing all boxes with area less than 500 after the above correction.</p>
<p>3) Pseudo-label training at image size 1024, removing the noisy train images and using the rest along with the test images, whose labels were far more accurate. </p>
<p>4) The 512 image-size model worked best on the public LB, but it was 1024 that worked best on the private LB, at least in my case.</p>
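<p>A tiny pandas sketch of steps 1 and 2 above; the column names assume an unpacked competition CSV with <code>x, y, w, h</code> in pixels, and which boxes count as "small" is an assumption:</p>
<pre><code>import pandas as pd

df = pd.read_csv("train_unpacked.csv")          # hypothetical csv with x, y, w, h in pixels

small = (df.w * df.h) < 500                     # "small box" mask is an assumption
df.loc[small, ["x", "y"]] *= 0.98               # shift the corner in by ~2 percent
df.loc[small, ["w", "h"]] *= 1.5                # enlarge width/height by a factor of 1.5

df = df[(df.w * df.h) >= 500]                   # step 2: drop boxes still under 500 px^2
</code></pre>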
<p>These were the key things I could come up with in the last 10 days. <br>
I regret that I didn't select the 1024 image-size submission based on the above observation; it apparently turned out to be the one scoring highest on the private LB while only moderate on the public LB. 67.13 unselected </p>
<p>A few other things I couldn't explore due to shortage of time, such as Faster RCNN with a MobileNetV2 backbone; this light model could have worked in this competition. </p>
<p>Congrats to all top positions..</p>
<p>Maybe if I had joined earlier I could have placed better.</p>
<p>Next Competition is OSIC…</p>
<p>Thanks, yeah.. my sole objective was getting acquainted with object detection, which I had never worked on before; I just had superficial knowledge of it. </p>
<p>Here is the unselected gold kernel: <a href="https://www.kaggle.com/jaideepvalani/normal-pl-eff-tta-wbf-1024/data?scriptVersionId=40158023" target="_blank">https://www.kaggle.com/jaideepvalani/normal-pl-eff-tta-wbf-1024/data?scriptVersionId=40158023</a></p>
Global Wheat Detection | Would-be 39th place solution with single model (yolov4) from a first time competitor | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Introduction and thanks</h2>
<p>My team used both Effdet and YOLOv4 (though our final submissions were both Effdet). I focused on YOLOv4 so I will do a bit of a write up from my side. I started the competition about 2 weeks before the end after beginning learning machine learning in March. </p>
<p>Thanks to my team for supporting me throughout this competition even though I was underqualified and helping me debug. Thanks to Hawkey and NVNN for their YOLOv5 Pseudolabelling/OOF evaluation notebook, pabloberhauser for his notebook converting label formats to yolov4, and Mark Peng for precompiling Darknet binaries to work on the Kaggle enviornment. </p>
<h2>Training</h2>
<p>I used AlexeyAB's <a href="https://github.com/AlexeyAB/darknet" target="_blank">Darknet</a> to train. Using Colab Pro, the maximum image size I could get with reasonable speed was 704x704, batch size of 64, 32 subdivisions, and 94 epochs (6000 batches). All other parameters were the default given by Darknet, only adjusting filter, steps, etc. in order to train with 1 label. Training on Darknet was quite simple since it handles CV and augmentations automatically. However, labels need to be converted to txt files. I used <a href="https://www.kaggle.com/pabloberhauser/creating-label-files-for-use-in-yolov4" target="_blank">Pabloberhauser's notebook</a> to do this. This baseline alongside WBF over TTA gets a score of only about 0.7115 public and 0.6371 private. <a href="https://www.kaggle.com/stanleyjzheng/yolov4-wbf-inference?scriptVersionId=39789314" target="_blank">The WBF over TTA inference notebook is here</a>. I experimented lots with the training; I tried modifying anchors so that they were the same as yolov5, cutmix, but none of it worked as well as default on CV. </p>
<h2>Pseudolabelling</h2>
<p>I used NVNN's YOLOv5 notebook with Hawkey's OOF evaluation. For inference, I used <a href="https://github.com/Tianxiaomo/pytorch-YOLOv4" target="_blank">PytorchYOLOv4</a> to convert the .weights model saved by Darknet into a PyTorch file. However, I had tons of issues with training on pseudolabels: PytorchYOLOv4 was not very robust and gave me errors randomly. After a few PRs, and with the deadline fast approaching, I tried using Darknet to train. Libcuda.so is inaccessible on Kaggle environments, so Darknet would not compile either. Many thanks to Mark Peng for precompiling Darknet for use with Kaggle. Finally, we used Darknet to train on pseudolabels. </p>
<p>The precompiled Darknet did not have OpenCV, so we could not use mosaic. Therefore, we had to be very careful with overfitting to the pseudolabels. We used the same config as the training above, but only 800 batches (12 epochs). Then, we performed bayesian optimization and TTA over WBF for our final submissions. The best submission was 0.7406 on public LB and 0.6625 on private LB. </p>
<p><a href="https://www.kaggle.com/stanleyjzheng/yolov4pl-oof?scriptVersionId=40172709" target="_blank">Here is the notebook</a></p>
<h2>Final Remarks</h2>
<p>This methodology can definitely be improved. However, with how long it took to train a model (About 28 hours on a P100 on Colab Pro at 704x704 resolution) and the short timeline I had (2 weeks), I feel this is a pretty good solution. Once late submissions open up, I plan on testing other models as well as an ensemble with effdet. </p>
<p>On a more personal note, this whole competition has been a long journey that has felt like ages. I'm 16, and really turned myself around at the start of quarantine to begin learning machine learning. I discovered Kaggle only 3 months ago, and was super surprised by the community; the community is so generous with sharing knowledge and code. I feel so proud of getting my first medal in my first competition. I hope this write-up can be inspiring as notebooks from the likes of Chris Deotte and Alex Shonenkov. Thanks to everyone!</p>
<p>Thanks for the kind words! That's definitely a good point; I am not proficient enough to write my own pseudolabelling, OOF, etc. notebook from scratch (I'm currently working on ensemble from scratch in late submissions). It's definitely something for me to work towards, alongside building an intuition for hparam tuning, perhaps I will try to join the next competition earlier to not be influenced too much by the ideas of others.</p> |
Global Wheat Detection | [16th place solution] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, the two models are both FCOS + ATSS + GFL. Only the backbones are different: one model uses ResNeSt-101, the other ResNeXt-101-DCNv2. </p>
<p>Thanks for your interest in our notebook, but the notebook is only the inference code. We trained the models on our own workstation. The code is based on Detectron2 and implemented by ourselves. If you are interested in ATSS and GFL, you can refer to MMDetection, which also implements these algorithms. </p>
<p>Sorry, we don't use MMdetection. We use Detectron2</p>
<p>Yes, you are right. We apply one of mosaic and mixup aug first, and then apply albumentations augmentations. </p>
<p>Yes, FCOS + ATSS + GFL. </p>
<p>ATSS is based on FCOS. The authors have made the source code public; you can read it carefully. </p>
<p>Yes, we don't PL train dcnv2 in the final plan.</p>
<p>PL is trained on the whole set of test images for 4 epochs. Yes, it's done in the same notebook, one round after another, with a batch size of 4. </p>
<p>Thanks to the organizers for this interesting competition! Thanks to my teammates @vandesa003 @kevinzc @xiaoyushowyou2 @karkin96. Congratulations to all!</p>
<h2>Summary</h2>
<ul>
<li>Data Augmentation: MultiScale, Mosaic, Mixup, RandomCrop, Cutout, Hue, Brightness, CLAHE, RGBShift, RandomGamma, ToGray, Horizontal and Vertical Flip, Rotate90 </li>
<li>Backbone: ResNeSt 101, ResNeXt 101 DCNv2</li>
<li>Neck: FPN </li>
<li>Head: ATSS</li>
<li>Loss: Quality Focal Loss + DIOU Loss </li>
<li>5x TTA (origin, HorizontalFLIP, VerticalFLIP, ROTATE90, Rotate270) + WBF </li>
<li>2 models ensemble with WBF</li>
<li>2 rounds Pseudo Label</li>
</ul>
<h3>1. Data Split</h3>
<p>We shuffle the train dataset and split to fixed training set and validation set from the beginning of the competition. About 260 images for validation, others are used for training. </p>
<h3>2. Model Training</h3>
<p>We train a ResNeSt-101 and a ResNeXt-101 DCNv2 model for ensembling. For the neck I tried PAFPN and SEPC, but they didn't work, so we use a plain FPN. The heads are both ATSS, since it performs quite well. We use DIoU Loss for box regression and Quality Focal Loss (<a href="https://arxiv.org/pdf/2006.04388.pdf">https://arxiv.org/pdf/2006.04388.pdf</a>) for classification. I also tried the Distribution Focal Loss from the QFL paper; it works well on COCO, but it didn't work on this dataset. What's more, we adjust the anchor boxes from 8 * strides to 6 * strides, because the boxes in this dataset are smaller than in COCO according to the statistics. We use COCO-pretrained models and train with an input size of 1024, batch size 8, SGD optimizer, and 36 epochs for both of these two models. </p>
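<p>For reference, a minimal PyTorch sketch of Quality Focal Loss as defined in the GFL paper (binary cross-entropy with a soft IoU-quality target, modulated by |y - sigma|^beta, beta = 2 by default); this is an illustration, not our Detectron2 implementation:</p>
<pre><code>import torch
import torch.nn.functional as F

def quality_focal_loss(pred_logits, target_quality, beta=2.0):
    """pred_logits: raw classification logits; target_quality: soft IoU targets in [0, 1]."""
    sigma = pred_logits.sigmoid()
    bce = F.binary_cross_entropy_with_logits(pred_logits, target_quality, reduction="none")
    return ((target_quality - sigma).abs().pow(beta) * bce).sum()
</code></pre>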
<h3>3. Model Inference</h3>
<p>We use 5x TTA as listed above. The TTA results are merged by WBF, and the outputs of the two models are also merged by WBF. Besides, 2 rounds of pseudo labeling are applied to the ResNeSt model. We also tried to apply PL to the DCNv2 model at the same time, but the score dropped compared with PL on the ResNeSt model only. We guessed it was caused by the hyperparameters: there are too many inference hyperparameters, such as the model score threshold, NMS threshold, WBF threshold, pseudo-label score threshold and so on. To avoid over-tuning on the public test dataset, we only tuned them coarsely according to our knowledge and COCO practice.</p>
<h3>4. Final Plan</h3>
<p>ResNeSt + DcnV2 Ensemble -> ResNeSt Pseudo Label training
-> ResNeSt' + DcnV2 Ensemble -> ResNeSt PL training
-> ResNeSt'' + DcnV2 Ensemble</p>
<p>Public Score: 0.7672
Private Score: 0.7125</p> |
Global Wheat Detection | [30th place solution] Weird TTA trick? Detect by quadrant provided a great boost! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Lol I know what you mean. But hindsight is 20/20 - your first intuition might be right more often than wrong.</p>
<p>Hi all! It's been a real pleasure working alongside you during this competition. Before YOLOv5 came into the mix, I was sitting in 2nd place on the public leaderboard thanks to a neat little trick that gave me a bigger boost than expected.</p>
<p>I was working off an EfficientDetB5 model which gave around 0.72 on the public scoreboard without any TTA. With my TTA trick I got a massive boost to 0.7535. I'll explain it below, and would love to get your opinion on why it might have been so useful. Also, here is a <a href="https://www.kaggle.com/alexandersoare/gwd-detect-by-quadrant-tta-trick" target="_blank">notebook with a working example</a>.</p>
<p>The motivation was that I didn't have time to be training 1024x1024, but I still wanted to work with full resolution while training 640x640. So the idea was:</p>
<p>1) Break the test images up into 640x640 quadrants (so there would be some overlap between them).<br>
2) Do inference on all 4 quadrants and stitch the results together.<br>
3) Do inference on the full image resized to 640x640 (to help fill gaps - more on this later).<br>
4) Ensemble with WBF.</p>
<p>Here's a diagram explaining the quadrants + full image resized.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4256010%2Ffd3e335bf416544a894ff550af5af5eb%2Fdiag1.png?generation=1596729863641670&alt=media" alt=""></p>
<p><strong>About the stitching</strong></p>
<p>I didn't want to keep any boxes that got too close to the edge of the quadrants because they could cut through the middle of a wheat head. So I discarded them.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4256010%2Fdef474abb689561acb28b252830e14ca%2Fdiag2.png?generation=1596730533838943&alt=media" alt=""></p>
<p>And now you can see why I also did inference on the rescaled full image. I wanted to capture any boxes that may have been discarded with the method above.</p>
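<p>Here is a sketch of the cropping and edge-box filtering described above; it is not the notebook code, and the margin value is an assumption:</p>
<pre><code>import numpy as np

def quadrant_crops(image, crop=640):
    """Four 640x640 corner crops of a 1024x1024 image (they overlap in the middle).
    Returns (crop_image, (x_offset, y_offset)) pairs."""
    h, w = image.shape[:2]
    offsets = [(0, 0), (w - crop, 0), (0, h - crop), (w - crop, h - crop)]
    return [(image[y:y + crop, x:x + crop], (x, y)) for x, y in offsets]

def shift_and_filter(boxes, offset, crop=640, full=1024, margin=16):
    """Shift quadrant boxes back to full-image coordinates and drop any box that gets
    too close to one of the quadrant's *inner* edges (the cuts), since it may split a head.
    Boxes are [x1, y1, x2, y2] in quadrant coordinates."""
    xo, yo = offset
    kept = []
    for x1, y1, x2, y2 in boxes:
        near_cut = ((x1 < margin and xo > 0) or
                    (y1 < margin and yo > 0) or
                    (x2 > crop - margin and xo + crop < full) or
                    (y2 > crop - margin and yo + crop < full))
        if not near_cut:
            kept.append([x1 + xo, y1 + yo, x2 + xo, y2 + yo])
    return np.array(kept)
</code></pre>
<p>The surviving boxes from the four quadrants, plus the boxes from the resized full image, then go through WBF exactly as in step 4.</p>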
<p><strong>About the training</strong></p>
<p>I found that with this method, mosaic augmentation during training didn't help my score. I stuck with doing random resized crops of the training images. The size of the crops was uniformly distributed in [640, 1024]. That way I had the benefit of training near full resolution a lot of the time.</p>
<p><strong>Question to Kagglers</strong></p>
<p>I'm actually not familiar with the concept of multi-scale inference. I understand it's a TTA technique. What exactly is happening in it? Is it similar to what I did?</p>
<p>Thanks! Something to try next time.</p>
<p>Thanks a lot for the insight! One question: do you know how this is different from the "multi scale" TTA used by some of the top kernels? </p>
Global Wheat Detection | [340th place Solution] EfficientDet with PLx2 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you😄 </p>
<p>Thanks for all the work done by the contributors of the notebooks community; we really learned a lot from their sharing. And thanks to all the participants whose work makes the leaderboard more energetic!</p>
<p>It is not easy to get 340th place; we tried a number of methods but found it hard to improve. <br>
Through our tireless efforts, we got 0.7666 on public and 0.6916 on private.<br>
These are some of the strategies behind our final submission.</p>
<h3>Data Cleaning</h3>
<ul>
<li>We deleted some boxes around wheat heads that cannot be seen clearly, as well as some boxes at the edge of the picture that enclosed nothing.</li>
<li>We also resized boxes that enclosed the awn of the wheat down to just the wheat head. We did not resize the boxes in all pictures, because we were concerned that shrinking the boxes could lower the hit rate, since a prediction only counts as a hit when its IoU with the ground truth is > 0.5.</li>
</ul>
<p>Here is our dataset of cleaned markings: <a href="https://www.kaggle.com/dpyrtfq2372/pure-box" target="_blank">link to pure-box</a>. Thanks to <a href="https://www.kaggle.com/songzhijin" target="_blank">@songzhijin</a> and <a href="https://www.kaggle.com/luluqie" target="_blank">@Abusi Brown</a>, my team members, for their great effort and amazing work on the data cleaning. <br>
We used this tool to relabel the boxes: <a href="https://github.com/tzutalin/labelImg" target="_blank">Labelimg</a><br>
And I wrote this notebook for some format conversion: <a href="https://www.kaggle.com/dpyrtfq2372/train-marking-to-pascalvoc/data" target="_blank">Train Marking(csv) to PascalVOC(xml)</a></p>
<h3>Training</h3>
<p>We used 1024x1024 images, a stratified 5-fold split, and augmentations similar to other public solutions, on EfficientDet-D5.<br>
We also used apex.amp so that we could fit batch size 2 at 1024 resolution and get a better training loss.</p>
<h3>Inferencing</h3>
<p>We used 2 rounds of pseudo labeling. For the pseudo labels we chose WBF with iou_thr=0.2 and skip_box_thr=0.33; each round was trained for 3 epochs, with the data consisting of the training data with our cleaned labels plus the test data with pseudo labels.<br>
For the final prediction we went back to WBF with iou_thr=0.44 and skip_box_thr=0.43 to get a more precise result.</p>
<p>The detection results look like this:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4931964%2Fe83af1b1abef9051fceb33e93a4f0eca%2F__results___61_0.png?generation=1596705036976428&alt=media" alt=""></p>
<p>Here is our notebook link. <a href="https://www.kaggle.com/dpyrtfq2372/efficientdet-with-double-psudo-labeling" target="_blank">Efficientdet-with-double-Pseudo-Labeling</a><br>
Thanks for reading; I hope we can give you some inspiration through this page.</p>
<p>And I want to say in the end, thanks a lot to the Kaggle team; we really appreciate that they upheld the rules of the competition and gave us the platform to communicate with others!</p>