Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
Google Smartphone Decimeter Challenge | 5th place solution (Saito) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the winners, and thanks so much for hosting such an interesting competition! Here, I want to share my part of our team's solution.</p>
<h1>Improved satellite positioning</h1>
<p>I spent the first month of the competition learning GNSS and developed my own positioning program. In my program, I tried the following ideas,</p>
<ul>
<li>least squares weighting based on satellite elevation</li>
<li>satellite elevation mask</li>
<li>outlier removal based on least squares residuals</li>
<li>isrbM averaging and regularization</li>
<li>height smoothing and regularization</li>
<li>carrier smoothing using accumulated delta range</li>
</ul>
<p>However, most of these ideas became ineffective after the subsequent optimization-based smoothing. Perhaps the only essential improvement was carrier smoothing based on the accumulated delta range.</p>
<h1>Use of sensor data</h1>
<p>Based on acceleration, gyro, and geomagnetic sensors, I estimated vehicle orientation and acceleration, and added acceleration and lateral velocity terms to the cost function of subsequent optimizations. This idea may have had little effect after the introduction of velocity estimates.</p>
<h1>Velocity estimation using doppler shift</h1>
<p>The vehicle speed was estimated from the pseudorange rate by the least squares method and used for the subsequent optimization-based smoothing. This improved the public LB by about 1 meter.</p>
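<p>For readers unfamiliar with this step, here is a minimal numpy sketch of Doppler-based velocity estimation (not the author's code). The names <code>sat_pos</code>, <code>sat_vel</code>, <code>prr</code> and <code>x0</code> are assumptions standing in for the satellite positions/velocities, corrected pseudorange rates and an approximate receiver position.</p>
<pre><code>import numpy as np

def estimate_velocity(sat_pos, sat_vel, prr, x0):
    """Least-squares receiver velocity from pseudorange rates (Doppler).

    sat_pos : (N, 3) satellite ECEF positions [m]
    sat_vel : (N, 3) satellite ECEF velocities [m/s]
    prr     : (N,)   pseudorange rates [m/s], receiver clock drift still included
    x0      : (3,)   approximate receiver ECEF position [m]
    """
    # Unit line-of-sight vectors from the receiver to each satellite.
    los = np.asarray(sat_pos, float) - np.asarray(x0, float)
    los /= np.linalg.norm(los, axis=1, keepdims=True)

    # Model: prr_i = los_i . (v_sat_i - v_rx) + clock_drift
    # Unknowns: receiver velocity (3) and receiver clock drift (1).
    G = np.hstack([-los, np.ones((len(prr), 1))])
    y = prr - np.sum(los * sat_vel, axis=1)

    sol, *_ = np.linalg.lstsq(G, y, rcond=None)
    return sol[:3], sol[3]  # velocity [m/s], clock drift [m/s]
</code></pre>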
<h1>Optimization based smoothing</h1>
<p>This part is the core of my solution.</p>
<p>First, the dynamics of the vehicle is expressed by the following state equation.<br>
<img src="https://user-images.githubusercontent.com/309785/128366897-369cc75b-46e9-419b-8f35-d0e884e9ff27.png" alt="state_equation"><br>
where \( \phi \) and \( \psi \) are latitude and longitude, respectively.</p>
<p>Then, by vertically stacking the states \( X \) and the inputs (jerks) \( U \) at each time, the equality constraints on the state transitions can be described as follows,</p>
<p>\[ A X + B U = 0 \]</p>
<p>In addition, by determining the observation matrix \( C \) according to the observation time based on Hermite interpolation, the vector \( Y \), which is a vertical sequence of latitude and longitude at each observation time, can be written as follows,</p>
<p>\[ Y = C X \]</p>
<p>The optimization evaluation function \( J \) is set as the sum of the squares of the jerk \( U \) and the position estimation error, as follows,<br>
\[ J = U^T R U + (\hat{Y} - C X)^T L (\hat{Y} - C X) \]</p>
<p>Finally, the position estimation is formulated as the following quadratic programming problem.<br>
<img src="https://user-images.githubusercontent.com/309785/128366891-08f79ee0-881e-4a6a-b9a5-887270cc83fb.png" alt="optimization"></p>
<p>Based on this method, various information can be integrated by adding velocity error, acceleration error, cost related to lateral velocity, etc. to the cost function.</p>
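<p>To make the structure of this quadratic program concrete, here is a toy sketch using cvxpy (my choice of solver, not necessarily the author's), with small random matrices standing in for \( A \), \( B \), \( C \), \( R \), \( L \) and the observed positions \( \hat{Y} \).</p>
<pre><code>import cvxpy as cp
import numpy as np

# Toy dimensions standing in for stacked states / inputs / observations.
n_x, n_u, n_y, n_c = 60, 20, 30, 40
rng = np.random.default_rng(0)
A = rng.standard_normal((n_c, n_x))
B = rng.standard_normal((n_c, n_u))
C = rng.standard_normal((n_y, n_x))
Y_hat = rng.standard_normal(n_y)   # noisy position observations
R = np.eye(n_u)                    # jerk weight
L = np.eye(n_y)                    # position-error weight

X = cp.Variable(n_x)               # stacked states
U = cp.Variable(n_u)               # stacked jerk inputs

# J = U^T R U + (Y_hat - C X)^T L (Y_hat - C X), subject to A X + B U = 0.
cost = cp.quad_form(U, R) + cp.quad_form(C @ X - Y_hat, L)
problem = cp.Problem(cp.Minimize(cost), [A @ X + B @ U == 0])
problem.solve()

X_smoothed = X.value               # smoothed trajectory states
</code></pre>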
<p>The important point is to perform the above optimization using the positions of all smartphones in one drive simultaneously. This is clearly more accurate than making an estimate for each smartphone separately and averaging them.</p>
<h1>Map matching</h1>
<p>This process is applied only to SJC/downtown.</p>
<ul>
<li>If the shortest distance between the GNSS position prediction result and the ground truth is more than a certain value, the prediction value is excluded in advance before smoothing.</li>
<li>Replace the smoothing result with the nearest neighbor point of ground truth and perform smoothing again.</li>
</ul>
<h1>Bias correction</h1>
<p>Because the smartphones are placed in the vehicle, I thought that the effect of multipath would appear asymmetrically and the positioning result would be biased with respect to the direction of the vehicle. Based on this idea, our team added a process to correct the position according to the direction the vehicle is traveling. This method contributed to the improvement of CV and public LB, but did not improve the private LB score.</p>
<h1>Do not use public LB for validation</h1>
<p>In the last indoor location competition, my team wasn't able to implement drastic improvement ideas, such as discrete optimization, towards the end of the competition due to poor post-processing validation. In this competition, I tried to minimize the number of submissions by completing all parameter tuning and validation locally. As a result, our team was able to stay in the gold medal range even though a shake-up occurred.</p>
<h2>notebooks</h2>
<ul>
<li><a href="https://www.kaggle.com/saitodevel01/gsdc-optimization-based-smoothing-1st-version" target="_blank">GSDC - Optimization based smoothing (1st version)</a></li>
<li><a href="https://www.kaggle.com/saitodevel01/gsdc-vehicle-speed-estimation-by-doppler-shift" target="_blank">GSDC - Vehicle Speed Estimation by Doppler Shift</a></li>
<li><a href="https://www.kaggle.com/saitodevel01/gsdc-improved-raw-gnss-baseline" target="_blank">GSDC - Improved Raw GNSS Baseline</a></li>
<li><a href="https://www.kaggle.com/saitodevel01/dsdc-unified-post-processing" target="_blank">DSDC - Unified post-processing</a></li>
<li><a href="https://www.kaggle.com/saitodevel01/gsdc-bias-eda" target="_blank">GSDC - Bias EDA</a></li>
<li><a href="https://www.kaggle.com/saitodevel01/gsdc-bias-correction" target="_blank">GSDC - Bias Correction</a></li>
</ul> |
Google Smartphone Decimeter Challenge | 6th Place Solution (shimacos part) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the hosts for organizing such a unique competition!<br>
In this competition, I was able to become a Grandmaster and my teammate <a href="https://www.kaggle.com/fuumin621" target="_blank">@fuumin621</a> was able to become a Master!<br>
Here is an introduction to my part of our solution, in brief.<br>
It was important for us to use AccumulatedDeltaRange properly and to look at the data carefully.</p>
<h1>Preprocess</h1>
<p>I predicted the anomaly (distance from ground truth) and speed from IMU data and baseline data by lightGBM.<br>
Then, I replaced anomalies above the threshold with NULLs and linearly interpolated between them.</p>
<h1>Kalman smoothing</h1>
<p>I applied two variants of linear Kalman smoothing.<br>
One takes into account velocity, and the other takes into account acceleration (i.e. 4D and 6D).<br>
Also, since the interval of the observed values was not constant, I replaced delta_t in the state transition matrix for each update step.<br>
I tried making the observation noise larger in downtown, predicting the yaw rate, and using a nonlinear Kalman filter, but it didn't work.</p>
<h1>AccumulatedDeltaRange</h1>
<p>I knew that phase observation was important, so I tried to improve the baseline using the AccumulatedDeltaRange from the satellite data, but it did not work due to lack of knowledge…<br>
So I decided to use the <a href="https://developer.android.com/guide/topics/sensors/gnss" target="_blank">GNSS Analysis app from Google</a>.<br>
By using this tool, I was able to obtain observations using the AccumulatedDeltaRange for some of the data.<br>
The observation data itself was difficult to use directly because it contained bias, but the relative distances obtained by taking the differences between epochs were very accurate, as shown in the following image (delta_adr = ADR(t) - ADR(t-1)).<br>
<img src="https://user-images.githubusercontent.com/24289602/128328750-1217fc5d-5280-42bd-be8b-0adf2485f3af.png" alt="image"><br>
I used this to implement a forward hatch filter and a backward hatch filter and averaged them after applied Kaman smoothing.</p>
<pre><code>import numpy as np

# Forward hatch filter over the trajectory. Assumed shapes (not given in the write-up):
#   lat_lngs   : (T, 2) raw positions
#   delta_adrs : (T, 2) ADR-derived rate terms (multiplied by delta_t below)
#   delta_ts   : (T,)   time step between epochs
#   M          : hatch-filter window length, tuned per area
prev_smooth = lat_lngs[0]
res = [prev_smooth]
for raw, delta_adr, delta_t in zip(
    lat_lngs[1:, :], delta_adrs[1:, :], delta_ts[1:]
):
    if not all(np.isnan(delta_adr)):
        # Blend the raw position with the previous smoothed position
        # propagated by the ADR-derived displacement.
        smooth = 1 / M * raw + (M - 1) / M * (
            prev_smooth + delta_adr * delta_t
        )
    else:
        # No usable ADR at this epoch: fall back to the raw position.
        smooth = raw
    prev_smooth = smooth
    res.append(prev_smooth)
res = np.array(res)
</code></pre>
<p>There is a parameter called M, which I optimized using GroupKFold for each area (finding the value with the smallest error on the train data and verifying it on the validation data).<br>
Smoothing with this hatch filter was very effective.<br>
There were many missing delta_adr values for some phones, so we filled them with values from other phones.</p>
<h1>Weighted phone mean</h1>
<p>After applying the hatch filter, I took a weighted average of latitude and longitude, using per-phone weights optimized by GroupKFold in the same way as M.<br>
Some of the <code>millisSinceGpsEpoch</code> values had small gaps depending on the phone, so I used linear interpolation to fill them.</p>
<h1>Snap2grid</h1>
<p>In the downtown area, I performed a discrete optimization using linearly interpolated ground truth.<br>
To prevent overfitting, the ground truth itself was not included in the snapping targets.<br>
The objective function is as follows, and I used greedy search to find the optimal path.</p>
<pre><code>cost = distance2cur + distance2prev * alpha
</code></pre>
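<p>A rough Python sketch of how such a greedy search could look (illustrative names, not the author's implementation): <code>preds</code> are the predicted positions, <code>grid_points</code> the interpolated ground-truth candidates, and <code>alpha</code> the continuity weight from the cost above.</p>
<pre><code>import numpy as np

def greedy_snap(preds, grid_points, alpha):
    """Greedily snap each predicted position to a grid point.

    preds       : (T, 2) predicted positions (in a locally metric frame)
    grid_points : (G, 2) candidate points from interpolated ground truth
    alpha       : weight of the continuity term
    """
    snapped = []
    prev = None
    for p in preds:
        distance2cur = np.linalg.norm(grid_points - p, axis=1)
        if prev is None:
            cost = distance2cur
        else:
            distance2prev = np.linalg.norm(grid_points - prev, axis=1)
            cost = distance2cur + distance2prev * alpha
        prev = grid_points[np.argmin(cost)]
        snapped.append(prev)
    return np.array(snapped)
</code></pre>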
<h1>Stay point mean</h1>
<p>Using the speed predicted by LightGBM, I averaged the positions of points where the predicted speed was below a threshold.</p>
<h1>Ensemble</h1>
<p>I averaged the results obtained with various parameters, and the CV was about 2.149.<br>
Finally, after adding other members' models and post-processing, we got CV 2.0569 and LB 2.285.<br>
I knew that the private set put a lot of weight on highway and tree routes, so I decided on the ensemble weights taking that into account.</p> |
Google Smartphone Decimeter Challenge | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the host for organizing this competition. I participated in the competition solo from start to finish; although it was a very tough competition, it was very meaningful, as it gave me a chance to experience the interesting technology of GNSS.</p>
<p>Here is my solution. </p>
<h1>1.Baseline improving</h1>
<p>I rebuilt the baseline based on <a href="https://www.kaggle.com/hyperc/gsdc-reproducing-baseline-wls-on-one-measurement" target="_blank">this notebook</a>.<br>
For isrbM, I used the median value for each phone-satellite pair.</p>
<h2>1-1.Selecting a satellite</h2>
<p>I improved the baseline by excluding satellites that are a source of error from the least-squares calculation.</p>
<p>For the filter condition, I mainly used the elevation angle. Signals from satellites with low elevation angles are excluded because they are strongly affected by various errors. </p>
<h2>1-2.Carrier smoothing</h2>
<p>Pseudorange smoothing with Accumulated Delta Range (ADR), as described in <a href="https://www.kaggle.com/gymf123/onepager-tip-acumulated-delta-range-adr" target="_blank">this notebook</a>. The original pseudorange and the previous pseudorange + ADR difference are mixed in a certain ratio to form the final pseudorange. ADR is relative but accurate, so it can be combined with the absolute value of the pseudorange to improve accuracy. Since the accumulated ADR value can be reset to zero by a cycle slip, I applied carrier smoothing only when AccumulatedDeltaRangeState = 25.</p>
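<p>A schematic sketch of the carrier-smoothing recursion described above (hatch-filter style). The blending weight <code>w</code>, the array names and the validity check are my assumptions, not the author's exact implementation.</p>
<pre><code>import numpy as np

ADR_STATE_VALID = 25  # the AccumulatedDeltaRangeState value used in this write-up

def carrier_smooth(pr, adr, adr_state, w=0.1):
    """Blend each raw pseudorange with the previous smoothed pseudorange + ADR increment."""
    pr = np.asarray(pr, dtype=float)
    pr_s = pr.copy()
    for t in range(1, len(pr)):
        if adr_state[t] == ADR_STATE_VALID and adr_state[t - 1] == ADR_STATE_VALID:
            # Previous smoothed pseudorange propagated by the carrier-phase increment.
            propagated = pr_s[t - 1] + (adr[t] - adr[t - 1])
            pr_s[t] = w * pr[t] + (1.0 - w) * propagated
        else:
            # Invalid ADR / possible cycle slip: fall back to the raw pseudorange.
            pr_s[t] = pr[t]
    return pr_s
</code></pre>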
<h1>2.Estimation of relative position</h1>
<h2>2-1. Vehicle speed calculation using doppler shift</h2>
<p>The relative velocity between the satellite and the vehicle can be determined by the frequency change of the signal (doppler shift). To determine the speed of a vehicle, we need the position of the satellite, the position of the vehicle, the speed of the satellite, the distance between the satellite and the vehicle, and the doppler shift. The position and speed of the satellite are given as data. The position of the vehicle and the distance between the satellite and the vehicle are obtained from the calculation results of Baseline improving (Need to subtract clkbias from pseudorange). The doppler shift is given as PseudorangeRateMetersPerSecond. The vehicle speed is then calculated by the least squares method using information from multiple satellites. </p>
<p>The vehicle speed (and the relative position calculated from it) obtained by this method was very accurate, and was a major factor in improving the score. </p>
<h2>2-2. ML prediction (add IMU data)</h2>
<p>Since the relative positions obtained in 2-1 are missing in some places, I also combined IMU sensor data to create a machine learning model to supplement them. I built a prediction model in lightGBM with lag and rolling features of the IMU and vehicle velocity.</p>
<h1>3. Reject outlier</h1>
<p>There are some outliers in the baseline, which I will remove.</p>
<h2>3-1. Abnormally high speeds</h2>
<p>Exclude points that have a very large distance from the previous and next point. </p>
<h2>3-2. Based on ground truth</h2>
<p>Since some areas have overlapping test and train paths, I was able to use the train ground truth to detect outliers. The closest distance to the ground truth data was calculated for each point, and points above the threshold were removed as outliers.</p>
<h2>3-3. Based on reference point calculated by relative position</h2>
<p>Since much of the test data has paths that do not exist in train, the 3-2 method can only be used in a very limited way. To solve this problem, I created reference points that can be used as an alternative to ground truth.<br>
For this, I used the relative positions calculated in 2. Starting from the coordinate at each time, the coordinates before and after a certain time are calculated based on the accumulated relative values. By sliding this process over each point in time, a large number of estimates can be obtained at each time. The accuracy of these estimates is highly dependent on the accuracy of the absolute coordinates of the starting point. If the starting point is an outlier, the estimated value will also be an outlier, but this is not frequent and the effect can be eliminated by clipping the estimated values. I then calculated a threshold value from the mean and standard deviation of the estimated values at each point, and used it to remove the outliers.</p>
<h1>4. Post process</h1>
<h2>4-1. kalman smoothing</h2>
<p>I used <a href="https://www.kaggle.com/emaerthin/demonstration-of-the-kalman-filter" target="_blank">this notebook</a> as is. </p>
<h2>4-2. Processing the speed0 period</h2>
<p>As discussed in <a href="https://www.kaggle.com/t88take/gsdc-eda-error-when-stopping" target="_blank">this notebook</a>, the absolute coordinates tend to be highly scattered while the car is stopped. To solve this problem, I created a model to predict stops, and replaced each continuous period predicted as a stop with the average of its points.</p>
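<p>A minimal sketch of this replacement step, assuming a boolean stop prediction per epoch (the names <code>lat</code>, <code>lng</code>, <code>is_stop</code> are illustrative, not the author's code).</p>
<pre><code>import numpy as np
import pandas as pd

def average_stop_segments(lat, lng, is_stop):
    """Replace each continuous run of predicted stops with its mean position."""
    df = pd.DataFrame({"lat": lat, "lng": lng, "is_stop": is_stop})
    # Label each contiguous block of identical is_stop values.
    df["block"] = (df["is_stop"] != df["is_stop"].shift()).cumsum()
    for _, seg in df[df["is_stop"]].groupby("block"):
        df.loc[seg.index, "lat"] = seg["lat"].mean()
        df.loc[seg.index, "lng"] = seg["lng"].mean()
    return df["lat"].to_numpy(), df["lng"].to_numpy()
</code></pre>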
<h2>4-3. Cost minimization</h2>
<p>I used the <a href="https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization" target="_blank">cost_minimization notebook</a>. Improved the accuracy of the absolute coordinates based on the relative position obtained in 2. </p>
<h2>4-4. Position shift</h2>
<p>I used <a href="https://www.kaggle.com/wrrosa/gsdc-position-shift" target="_blank">this notebook</a> as is.</p>
<h2>4-5. Weighted phones mean</h2>
<p>As described in <a href="https://www.kaggle.com/t88take/gsdc-phones-mean-prediction" target="_blank">this notebook</a>, this process averages the values of multiple phones in the same collection. It has been improved from the published version and changed to add weight to each phone model. </p>
<h1>Area grouping</h1>
<p>I added a little logic to the kNN introduced <a href="https://www.kaggle.com/columbia2131/area-knn-prediction-train-hand-label" target="_blank">here</a>, and implemented grouping based on the degree of path matching with the train. Each collection was divided into five groups, and the hyperparameters and order of processing were adjusted for each group. </p> |
Google Smartphone Decimeter Challenge | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Reproduced baseline</h2>
<p>As my initial predictions I used approach similar to the one described <a href="https://www.kaggle.com/c/google-smartphone-decimeter-challenge/discussion/238583" target="_blank">in this discussion</a> with one additional trick. I fitted WLS a few times, each time removing measurements with the biggest residuals. My reproduced baseline had slightly better CV score than the one provided by the host.</p>
<h2>Improving baseline using relative positions</h2>
<p>This part consisted of three steps:</p>
<ul>
<li>I computed velocities in the ENU coordinate system between consecutive timestamps using Accumulated Delta Range, pseudoranges, satellite positions and the reproduced baseline. After that, I removed outliers and applied smoothing to the predictions</li>
<li>Then I trained a convolutional neural network using the velocities from the previous step, IMU data and the baseline, which also predicted velocities in ENU coordinates. The model achieved a mean absolute error of around 0.25 m/s.</li>
<li>As a last step, I applied Weighted Least Squares to combine relative predictions with data from derived files. For each timestamp it solved the following optimization problem</li>
</ul>
<p>$$ \operatorname{argmin}_{x,\, b_{-k}, \ldots, b_k} \sum_{i=-k}^{k} \sum_{j=1}^{n_i} w_{ij} \left( \| s_{ij} - \Delta x_i - x \| + b_i - p_{ij} \right)^2 $$</p>
<p>where </p>
<ul>
<li>\( k \) - parameter controlling size of the window. </li>
<li>\( x \) - phone position at timestamp \( t \)</li>
<li>\( b_i \) - phone clock bias at timestamp \( t + i \)</li>
<li>\( n_i \) - number of pseudorange measurements at timestamp \( t + i \)</li>
<li>\( w_{ij} \) - inverse of \( j \)-th pseudorange measurement uncertainty multiplied by \( 1 + 0.25|i| \) </li>
<li>\( s_{ij} \) - \( j \)-th satellite position at timestamp \( t + i \) </li>
<li>\( \Delta x_{i} \) - model's prediction of relative position between timestamp \( t + i \) and \( t \) </li>
<li>\( p_{ij} \) - corrected \( j \)-th pseudorange measurement at timestamp \( t + i \)</li>
</ul>
<p>Parameter \( k = 10 \) worked best for me, so the window size was 21. This model had 24 unknowns: 3 for the position and 1 clock bias per timestamp. It had 6 times more parameters than WLS for a single timestamp, but 21 times more measurements. The downside was that any errors in the relative positions affected the performance of this solution.</p>
<h2>Improving baseline in downtown area</h2>
<p>As all the road segments from downtown areas in the test data were also present in ground truth data, I applied the following algorithm for each timestamp:</p>
<ul>
<li>select all points from ground truth file closer to the baseline prediction than 30 meters - let's call them candidate points</li>
<li>for each candidate point \( x_j \) estimate clock bias: \( b_j = \mathrm{median}_i ( || s_i - x_j || - p_i ) \)</li>
<li>find \( k \) points with the lowest values of <br>
$$ \sum_{i=1}^{n} w_i \left| \, \| s_i - x_j \| - b_j - p_i \, \right| $$<br>
and return their medoid.</li>
</ul>
<p>It was important to use median instead of mean to compute clock bias and to use absolute value of residuals instead of squares, because it helped with very noisy measurements in downtown area.</p>
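<p>A small numpy sketch of this candidate search for one epoch (not the author's code; array names are assumptions). It scores each ground-truth candidate with a median clock bias and a weighted L1 residual, then returns the medoid of the \( k \) best candidates.</p>
<pre><code>import numpy as np

def best_candidate(candidates, sat_pos, pr, w, k=5):
    """Pick a robust position from ground-truth candidate points for one epoch.

    candidates : (M, 3) ECEF candidate positions (ground-truth points within 30 m)
    sat_pos    : (N, 3) satellite positions
    pr         : (N,)   corrected pseudoranges
    w          : (N,)   measurement weights (inverse uncertainties)
    """
    scores = []
    for x_j in candidates:
        r = np.linalg.norm(sat_pos - x_j, axis=1)         # geometric ranges
        b_j = np.median(r - pr)                           # robust clock-bias estimate
        scores.append(np.sum(w * np.abs(r - b_j - pr)))   # weighted L1 residual score
    best = candidates[np.argsort(scores)[:k]]

    # Medoid: the point minimizing the summed distance to the other best points.
    d = np.linalg.norm(best[:, None, :] - best[None, :, :], axis=-1)
    return best[np.argmin(d.sum(axis=1))]
</code></pre>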
<h2>Postprocessing</h2>
<p>After that, I applied the following steps to improve predictions:</p>
<ul>
<li>Error correction model - it predicted the difference between the current predictions and the ground truth. It was a simple 1D convolutional neural network. Features for this model were based on the current predictions and the IMU data.</li>
<li>Aggregating predictions when speed is zero.</li>
<li>Aggregating predictions from different phones.</li>
</ul> |
Google Smartphone Decimeter Challenge | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, I would like to thank hosts for organizing an exciting competition.</p>
<p>Here, I introduce my solution.<br>
My solution exploits the velocity of the vehicle predicted using doppler shift, which is basically much more reliable than the baseline positions.</p>
<h3>1. Velocity prediction using doppler shift.</h3>
<p>Velocity of the vehicle can be estimated using PseudorangeRateMetersPerSecond, as shown here: <a href="https://www.kaggle.com/c/google-smartphone-decimeter-challenge/code" target="_blank">https://www.kaggle.com/c/google-smartphone-decimeter-challenge/code</a></p>
<h3>2. Smoothing by CNN.</h3>
<p>A simple 1D convolutional network is applied to the predicted velocity for smoothing.</p>
<h3>3. Extract reliable points in the baseline.</h3>
<p>I extracted reliable points in the baseline based on consistency between the velocity and relative position.<br>
The consecutive points in the baseline are grouped if the following conditions are satisfied:<br>
<img src="https://i.postimg.cc/vBYch2KT/consistent-group.png" alt=""><br>
where</p>
<ul>
<li><strong>x</strong>t: the baseline position at a time index t,</li>
<li><strong>v</strong>t: the velocity at a time index t,</li>
<li>ts, te: time indices of the start and end points of the consistent group,</li>
</ul>
<h3>4. Apply snap-to-grid to consistent groups (SJC only).</h3>
<p>Snap-to-grid is applied to the consistent groups.<br>
Grid points are generated by the linear interpolation of the ground truth.<br>
In order to avoid snapping to the opposite side of the road, the orientation degree of the vehicle is added to each grid point.<br>
The optimal grid points are obtained by solving the following optimization problem:<br>
<img src="https://i.postimg.cc/5yJsNCnD/snap-to-grid.png" alt=""><br>
where</p>
<ul>
<li>gt: an index of the grid point to which <strong>x</strong>t is snapped,</li>
<li><strong>u</strong>gt: a unit vector aligned with the orientation degree of the vehicle at the grid point.</li>
</ul>
<p>The first term of g(<strong>x</strong>t, <strong>v</strong>t, <strong>x</strong>gt, <strong>u</strong>gt) is the cost related to the distance between the baseline position and the grid points.<br>
The second term of g(<strong>x</strong>t, <strong>v</strong>t, <strong>x</strong>gt, <strong>u</strong>gt) is the cost for the deviation between the orientation degree of the velocity and that of the grid points.<br>
The above optimization problem can be solved using Dijkstra's algorithm.</p>
<h3>5. Interpolation based on the velocity.</h3>
<p>The baseline positions in the consistent groups are fixed.<br>
By fixing them, boundary conditions are imposed on the points between the consistent groups.<br>
In order to satisfy the boundary conditions, the velocity is modified by solving the following optimization problem:<br>
<img src="https://i.postimg.cc/HkhjMJVN/interpolation-based-on-velocity.png" alt=""><br>
where</p>
<ul>
<li>\sigma(|<strong>v</strong>t|) is the standard deviation depending on the norm of <strong>v</strong>t, which is estimated using the train set,</li>
<li>t^{e}_{c}: the time index of end point of the c-th consistent group,</li>
<li>t^{s}_{c+1}: the time index of start point of the (c+1)-th consistent group.</li>
</ul>
<p>Then, the positions between the consistent groups are interpolated as follows:<br>
<img src="https://i.postimg.cc/WbPTZN9n/interpolation-based-on-velocity2.png" alt=""></p>
<h3>6. Apply snap-to-grid to the interpolated points (SJC only).</h3>
<p>Snap-to-grid is applied to the interpolated points in almost the same way as for the consistent groups (with modified boundary conditions).</p>
<h3>7. Smoothing based on the velocity.</h3>
<p>The positions are smoothed by iteratively solving the optimization problem in "5. Interpolation based on the velocity", as shown below:<br>
<img src="https://i.postimg.cc/ht1RFNq1/post-smooth.png" alt=""></p>
<h3>8. Ensemble</h3>
<ul>
<li>I used several conditions to generate the consistent groups and averaged them.</li>
<li>The positions of phones in the same collection are averaged with the same weight.</li>
</ul>
<h3>9. Result</h3>
<p>Train: 2.331 / Public: 3.758 / Private: 2.597</p>
<h3>10. Remaining issues</h3>
<ul>
<li>Use IMU data to reduce prediction errors of the velocity.</li>
<li>Detect stop points and contract them to one point to reduce accumulation errors. <br>
Although the boundary conditions imposed by the consistent groups reduce the accumulation errors, stop points may cause a big problem to the interpolation by the velocity.</li>
<li>For further improvement, the baseline positions must be improved.<br>
I will try it by studying other solutions.</li>
</ul> |
Google Smartphone Decimeter Challenge 2022 | 15-th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We joined the competition 10 days before the deadline, so our solution is simple and based on public notebooks:<br>
1) Get positions following instructions from this notebook: <a href="https://www.kaggle.com/code/timeverett/getting-started-with-rtklib-py" target="_blank">https://www.kaggle.com/code/timeverett/getting-started-with-rtklib-py</a> (rtk)<br>
2) Get positions, speeds, and covariances from this notebook: <a href="https://www.kaggle.com/code/taroz1461/carrier-smoothing-robust-wls-kalman-smoother" target="_blank">https://www.kaggle.com/code/taroz1461/carrier-smoothing-robust-wls-kalman-smoother</a> (rwls)<br>
3) Apply Kalman filter to rtk and rwls positions separately: <a href="https://www.kaggle.com/code/taroz1461/carrier-smoothing-robust-wls-kalman-smoother" target="_blank">https://www.kaggle.com/code/taroz1461/carrier-smoothing-robust-wls-kalman-smoother</a><br>
4) Apply kalman filter to raw rtk positions again: <a href="https://www.kaggle.com/code/tqa236/kalman-filter-hyperparameter-search-with-bo" target="_blank">https://www.kaggle.com/code/tqa236/kalman-filter-hyperparameter-search-with-bo</a><br>
5) Calculate and correct biases per phone following this notebook: <a href="https://www.kaggle.com/code/saitodevel01/gsdc-bias-correction" target="_blank">https://www.kaggle.com/code/saitodevel01/gsdc-bias-correction</a>, <a href="https://www.kaggle.com/code/saitodevel01/gsdc-bias-eda" target="_blank">https://www.kaggle.com/code/saitodevel01/gsdc-bias-eda</a><br>
6) Shift all positions by the median difference from the ground truth, calculated on the train set<br>
7) Merge the results using the median and weighted averaging</p>
<p>Some of the submissions hit the gold area, but unfortunately I chose other ones.<br>
Thanks for the competition and for the great notebooks!</p> |
Google Smartphone Decimeter Challenge 2022 | 15th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We are new to Kaggle competitions, and this is the first time we have earned a silver medal.<br>
We learned a lot about GNSS analysis in the first month after joining this competition, because we didn't know anything about it. It has been quite enjoyable to learn how to determine precise positions using the GNSS satellites discovered by the receiver.<br>
We would like to thank the hosts very much for holding this exciting competition, and the participants for sharing great notebooks, especially <a href="https://www.kaggle.com/taroz1461" target="_blank">@taroz1461</a> and <a href="https://www.kaggle.com/timeverett" target="_blank">@timeverett</a>.</p>
<h1>Overview</h1>
<ul>
<li>Annotating every drive route with one of 'Highway', 'Tree' or 'Downtown'</li>
<li>Optimization of Satellite-Selection Criteria by Optuna</li>
<li>When Stopping, 3x Multiplication of the Estimated-Position Covariance passed into the Kalman Smoother</li>
<li>Robust WLS to determine Positions and Velocities, and Kalman Smoother, provided in <a href="https://www.kaggle.com/code/taroz1461/carrier-smoothing-robust-wls-kalman-smoother" target="_blank">this notebook</a> by <a href="https://www.kaggle.com/taroz1461" target="_blank">@taroz1461</a></li>
<li>RTKLIB to determine Positions, provided in <a href="https://www.kaggle.com/code/timeverett/getting-started-with-rtklib" target="_blank">this notebook</a> by <a href="https://www.kaggle.com/timeverett" target="_blank">@timeverett</a></li>
</ul>
<p>We will introduce the first 3 topics in this discussion (we skipped the others, which look duplicated in other top-place solutions).</p>
<h2>Drive-Route Annotations</h2>
<p>We investigated and annotated every drive route using Google Earth Pro, following the approach shown in <a href="https://www.kaggle.com/competitions/google-smartphone-decimeter-challenge/discussion/245160" target="_blank">the discussion</a> of the last competition by <a href="https://www.kaggle.com/taroz1461" target="_blank">@taroz1461</a>.<br>
The distribution of these annotations is shown below.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10311266%2F19cda438589b2d2bcbb78dfa6b78ad8e%2Fmost-occurred-label-dist.png?generation=1659367406615919&alt=media" alt=""></p>
<h2>Optimization of Satellite-Selection Criteria</h2>
<p>All the satellite-selection criteria are determined for each drive-route annotation by optimization with Optuna.<br>
We thought the satellite-selection criteria should change in different environments, so we ran the optimization separately for each of the 3 drive-route annotations.<br>
Finally, we slightly improved the public LB score from 2.65 to 2.64 without the Kalman smoother (from 2.140 to 2.132 on the private LB).</p>
<p>Unfortunately, after the competition finished we noticed that we could get a private score of 1.784 with annotation-independent postprocessing.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10311266%2F365f821b9687c7063335b88c2378dee9%2Fbest_score_on_late_submission.png?generation=1659365999974204&alt=media" alt=""></p>
<h2>When Stopping, 3x Multiplication of Estimated-Position Covariance</h2>
<p>Our visualization of the drive routes shows that the fluctuations of the estimated positions are significantly large when stopping, particularly in 'Tree' or 'Downtown'.<br>
We simply multiplied the position covariance by 3 when the receiver speed was less than 1 meter per second, for all drive-route annotations except 'Highway', which improved the public score from 2.41 to 2.38 (from 1.904 to 1.863 in the private score).</p> |
Google Smartphone Decimeter Challenge 2022 | 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The ADR observations were processed using the following procedure.</p>
<p>(1) Exclude invalid ADRs based on the <code>AccumulatedDeltaRangeState</code> flags.<br>
(2) Exclude ADR cycle slips based on <code>diff(ADR)-PseudorangeRate*dt</code>.</p>
<p>Note that (2) also catches receiver clock jumps. The values from (2) are shown in the following figure. As a threshold, I used 1.5 m. When the Doppler accuracy is poor, ADR is almost never obtained, so this eliminates a significant number of cycle slips. Of course, this still leaves half-cycle slips and outliers. However, the M-estimator can almost eliminate the influence of these outliers.<br>
<img src="https://user-images.githubusercontent.com/7933764/182264555-a2fe4adf-65af-4047-9de5-5375025cd7df.png" alt="fig5"></p>
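<p>A small numpy sketch of check (2) for a single satellite (hypothetical array names, not the author's code); the 1.5 m threshold is the value quoted above.</p>
<pre><code>import numpy as np

def adr_delta_valid(adr, prr, dt, thresh=1.5):
    """Flag epochs whose ADR time-difference is consistent with the Doppler prediction.

    adr : (T,) accumulated delta range [m] for one satellite (already state-filtered)
    prr : (T,) pseudorange rate [m/s]
    dt  : (T,) time step to the previous epoch [s]
    """
    d_adr = np.diff(adr)
    predicted = prr[1:] * dt[1:]              # Doppler-predicted range change
    ok = np.abs(d_adr - predicted) < thresh   # large mismatch -> cycle slip or clock jump
    # Epoch t is usable only if the transition from t-1 passed the check.
    return np.concatenate([[False], ok])
</code></pre>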
<p>I am very happy to have won first place in this competition again. I am a researcher in the GNSS and robotics fields, but last year's competition was very inspiring, with many different people working on ideas that I would never have thought of. I would be very happy if you could also share your solutions and ideas.</p>
<p>Smartphone's GNSS observations are very noisy, and the output data has many missing and abnormal values, making it very difficult to estimate positions accurately. I also had difficulty dealing with the differences in observations for different types of smartphones. There were a lot of details to deal with and a lot to do, and I think this competition was unusual and tough for many of us.</p>
<h1>Overview</h1>
<p>The basic idea of my solution is the same as <a href="https://www.kaggle.com/competitions/google-smartphone-decimeter-challenge/discussion/262406" target="_blank">last year's</a>: global optimization of position and velocity using pseudorange, pseudorange rate (Doppler), and ADR (carrier phase) by Factor Graph Optimization (FGO). For more on FGO, see last year's solution. The key point of this year's method, which contributed to the accuracy, was to separate the velocity and position estimation and process them in two stages. The flow of the proposed method is shown in the following figure. It has a very simple structure.<br>
<img src="https://user-images.githubusercontent.com/7933764/182075782-be8d5e1c-992b-4b5f-8c69-5426bb6cf222.png" alt="fig1"></p>
<h2>Input</h2>
<ul>
<li><p><code>gnss_log.txt</code></p></li>
<li><p>GNSS observation data and satellite orbit data (broadcast ephemeris) of base station</p>
<ul>
<li>For the GNSS base station, I used <a href="https://geodesy.noaa.gov/CORS/" target="_blank">NGS CORS</a> data at 30-sec intervals. I used a single base station, but I thought <a href="https://www.kaggle.com/saitodevel01" target="_blank">@saitodevel01</a>'s method of the ensemble of multiple base station analysis results might contribute to accuracy. I should have tried it… </li></ul></li>
</ul>
<h1>Velocity estimation stage</h1>
<p>Accurate relative position (velocity) can be calculated from ADR time differences, but availability is low because of vulnerability to signal shielding and frequent cycle slips. The velocity is estimated first using Doppler, which is more robust, and then outliers are removed, interpolated, and used as a loose constraint in the next stage of position estimation. At the same time, it can be used to determine that a vehicle is stopped. If the velocity were included in the optimization at the same time as the position, such interpolation and stop detection would become difficult. For data with many missing values and outliers (elevated roads, tunnels, etc.), it was very effective to estimate and correct the velocity in a separate stage.<br>
<img src="https://user-images.githubusercontent.com/7933764/182075787-50a13c78-2662-4729-b4ab-04af3d609e85.png" alt="fig2"></p>
<h1>Position estimation stage</h1>
<p>Position and receiver clock are estimated by optimizing a graph constructed using the pseudorange corrected for bias components using a base station, the ADR time difference, and the velocity estimated in the previous stage. For city driving data, where there are many pauses due to traffic signals, the pseudorange observations at stops (detected from the velocity estimated in the previous stage) are eliminated. Between each state (position), the estimated velocity is used to constrain the relative position, and a highly accurate relative position constraint is added when ADR is valid. In addition, as an absolute position constraint, the pseudorange corrected by the base station is used. No post-processing is done and the position estimated by optimization is submitted as the final estimate.<br>
<img src="https://user-images.githubusercontent.com/7933764/182075795-f044b3cc-ed7f-493a-82d0-aea6fdbb7a73.png" alt="fig3"></p>
<h1>Optimization</h1>
<p>As in the previous year, <a href="https://github.com/borglab/gtsam" target="_blank">GTSAM</a> was used. This time, we used a simple M-Estimator for robust optimization instead of Switchable Constraint, which we used last year. Switchable Constraint has the advantage of being able to visualize and tune the values of the weights, but it has the disadvantages of poor convergence and longer processing time. Since the performance was almost the same, I used the M-Estimator with the <a href="https://xipengwang.github.io/paper-reading-mixture-models-for-least-square-optimization/#Huber-Loss" target="_blank">Huber function</a>, which is suitable for noisy smartphone GNSS observations because the weights never become completely zero. The processing time was about 30 minutes for the entire test data run, allowing for a lot of trial and error in a limited amount of time.</p>
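<p>The robust optimization itself is done with GTSAM's built-in noise models; purely as a library-agnostic illustration of what a Huber M-estimator does to residuals (not the author's code), an IRLS-style weight function looks like this:</p>
<pre><code>import numpy as np

def huber_weight(residual, k=1.345):
    """IRLS weight implied by the Huber loss: quadratic core, linear tails.

    Large residuals are down-weighted (weight ~ k/|r|) instead of being zeroed
    out, so noisy measurements still contribute a little, which is the property
    noted above.
    """
    r = np.abs(residual)
    return np.where(r <= k, 1.0, k / np.maximum(r, 1e-12))
</code></pre>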
<h1>Tips</h1>
<ul>
<li><p>The <code>device_gnss.csv</code> has missing pseudoranges, so it was more accurate to convert the pseudoranges from <code>gnss_log.txt</code> by myself.</p></li>
<li><p>As <a href="https://www.kaggle.com/saitodevel01" target="_blank">@saitodevel01</a> <a href="https://www.kaggle.com/competitions/smartphone-decimeter-2022/discussion/340692" target="_blank">pointed out</a>, there is a problem with Doppler timing misalignment on the XiaomiMi8. Apparently, only the Doppler of the XiaomiMi8 is aligned to the Android system clock (like the IMU), not the GPS clock. As noted in the <a href="https://www.kaggle.com/competitions/smartphone-decimeter-2022/discussion/323135" target="_blank">discussion</a>, the velocity was best estimated when an offset of 600 ms was added.</p></li>
<li><p>The SamsungGalaxyS20 and XiaomiMi8 differ significantly from the other phones in terms of the pseudorange rate and ADR uncertainty provided by the phone. I used a satellite elevation angle-based error model instead of the uncertainty provided by the phone.</p></li>
<li><p>With respect to the interpolation method for the velocity, the <a href="https://www.mathworks.com/help/matlab/ref/makima.html" target="_blank">Akima</a> interpolation method was more accurate than spline interpolation. Spline interpolation is prone to overshooting.</p></li>
<li><p>When the <code>HardwareClockDiscontinuityCount</code> value changes, cycle slips occur and ADRs become discontinuous. This occurs only in some runs on the Pixel4/4XL. Under these circumstances, ADRs are rarely available.</p></li>
</ul>
<h1>Ideas that were tried and did not work</h1>
<ul>
<li><p>Velocity estimation with accelerometer integration. I spent quite a bit of time on it, but in the end it did not contribute to accuracy.</p></li>
<li><p>Estimation of M-estimator hyperparameters by learning. I tried learning and dynamically applying the appropriate Huber function parameters according to the residuals of the observations, but could not go beyond tuning the fixed parameters individually for each run.</p></li>
<li><p>A method that optimizes the distance from each satellite individually. This would be so-called tightly coupled, which optimizes the distance and distance rate from each satellite instead of optimizing the position and velocity, but it did not work. I would like to try this as a research project because I feel it has potential.</p></li>
</ul>
<h1>Other views on the competition</h1>
<ul>
<li><p>It is regrettable that, as mentioned in <a href="https://www.kaggle.com/competitions/smartphone-decimeter-2022/discussion/337416" target="_blank">this discussion</a>, the mistake in the ground truth had no small impact on the competition. The host said that the test data was correct, but I doubt that the error can really be verified when lever arm offsets of several tens of centimeters are mixed in. This could be a big problem if we are competing over differences of a few centimeters, since the ground truth bias error directly affects the final leaderboard.</p></li>
<li><p>The issue of invalid ADR (carrier phase) data on some GooglePixel 4/4XL runs was honestly something that could not be addressed; I feel that decimeter accuracy cannot be achieved without ADR. In the test data, there were problems with the <code>/2021-08-12-US-MTV-1/GooglePixel4</code> and <code>/2022-04-25-US-OAK-2/GooglePixel4</code>, but I think these runs should have been excluded from the test data.</p></li>
<li><p>I question the setup regarding the fact that the test data includes data that was run in a completely different region than the training data (e.g., runs of OAK and all runs of LAX). If machine learning is to be used in the solution, I believe that at least data that was run in the same region as the training data should have been provided.</p></li>
<li><p>After the competition, I noticed that the coordinates of the base station had an offset… After correcting the offset, I made a late submission and got Public: 1.372, Private: 1.197. This is the best score so far.<br>
<img src="https://user-images.githubusercontent.com/7933764/182088010-f608223c-acc5-412f-a0cc-ead5013663f5.png" alt="fig4"></p></li>
<li><p>If a similar competition is held next year, I am not sure if I will participate. I don't want to look at the GNSS logs on a smartphone for at least a while!</p></li>
</ul> |
Google Smartphone Decimeter Challenge 2022 | 26th Place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I tried different weights, no magic there. If the variance estimates were accurate, then theoretically, the weights should all be equal. But since it seemed like there was some scale factor between variance estimates of RTKLIB and my Kalman smoother, the weights were needed. First I found a good pair of weights for the 2 best solutions, and added little bits of the 2 worst solutions. If I had more time, I could have done a more automated global search, but time ran out…</p>
<p>Thank you to all the competition organizers and the participants who shared their knowledge with the rest of us learning about GNSS. Who knew GPS positioning was so complicated?</p>
<p><strong>My solution was based on these 3 public notebooks and ideas:</strong></p>
<ol>
<li><p>"WLS" A solution based on "Carrier-Smoothing-WLS-Kalman Filtering" from<br>
<a href="https://www.kaggle.com/saurabhbagchi" target="_blank">@saurabhbagchi</a> - <a href="https://www.kaggle.com/code/saurabhbagchi/carrier-smoothing-robust-wls-kalman-smootherv2" target="_blank">https://www.kaggle.com/code/saurabhbagchi/carrier-smoothing-robust-wls-kalman-smootherv2</a><br>
which was based on<br>
<a href="https://www.kaggle.com/taroz1461" target="_blank">@taroz1461</a> - <a href="https://www.kaggle.com/code/taroz1461/carrier-smoothing-robust-wls-kalman-smoother" target="_blank">https://www.kaggle.com/code/taroz1461/carrier-smoothing-robust-wls-kalman-smoother</a></p></li>
<li><p>"RTKLIB" An RTKLIB solution <a href="https://www.kaggle.com/timeverett" target="_blank">@timeverett</a> - <a href="https://www.kaggle.com/code/timeverett/getting-started-with-rtklib" target="_blank">https://www.kaggle.com/code/timeverett/getting-started-with-rtklib</a>. I would especially like to commend RTKLIBExplorer for all his explanations and patience in answering many people's questions on RTKLIB.</p></li>
<li><p>"OFFSET" An offset correction based on <a href="https://www.kaggle.com/saitodevelo01" target="_blank">@saitodevelo01</a> - <a href="https://www.kaggle.com/code/saitodevel01/gsdc-bias-correction" target="_blank">https://www.kaggle.com/code/saitodevel01/gsdc-bias-correction</a></p></li>
</ol>
<p><strong>I used the above algorithms as follows:</strong></p>
<ol>
<li><p>Starting with the WLS solution, which uses an L1 optimization at each point to determine the receiver clock bias and thus the best X,Y,Z ECEF coordinates, I replaced the Kalman smoother with a more complicated model as follows:<br>
There were 9 states: (3) x,y,z, (3) the velocities in x,y,z, (3) the accelerations in x,y,z.<br>
The measurements were position and velocity from the WLS algorithm (vel comes from doppler shift).<br>
The measurement noise was as determined from WLS, which comes from the device_gnss uncertainty in Pseudorange and Pseudorange velocity, using the Jacobian of the L1 solution. I implemented Kalman Smoothing, which effectively runs the Kalman filter forward and backward, combining the results properly. The authority for this is "Applied Optimal Estimation" by Art Gelb. I tuned the process noise (Q) and thresholds (Mahalanobis) for throwing out measurements by simple trial and error. I added the OFFSET solution above to this, but it seemed like the offset got smaller as my solutions got better, so in the end perhaps this did not make much difference.</p></li>
<li><p>I generated the RTKLIB solution, and initially replaced the Q!=2 bad points it found with my solution from #1. Since my #1 was better than the other publicly available solutions, this solution put me ahead of all the other people trying out RTKLIB for the first time (Public LB 2.593)</p></li>
<li><p>I wanted to be able to use the base positions from RTKLIB to improve point positioning so that I could run my Kalman smoother on it, but was never able to get that to work.</p></li>
<li><p>I tried using the Ionosphere and Troposphere correction functions from RTKLIB instead of the ones from the device_gnss file, but did not see an improvement from that.</p></li>
<li><p>I realized that some of the device_gnss entries have blanks for the baseline prediction and pseudoranges, yet RTKLIB was able to make predictions. Looking a little more closely, it seemed like you could generate pseudorange estimates from the timing numbers directly. So I generated 2 new solutions using the RTKLIB observation data, running it through the WLS/Kalman Smoother algorithm.</p></li>
<li><p>I then developed a merge algorithm that took multiple solutions with weights W and computed the weighted mean, where the weights for each point were W divided by the variance estimate at each point. The tricky part here was dealing with NaNs properly (a sketch of this kind of NaN-aware weighted mean appears after this list). This method allowed me to remove the "bad" measurements from the RTKLIB solution, and other points I deemed to be outliers. I found small improvements by tweaking the variance estimates.</p></li>
<li><p>Running out of time, I did a simple search for the best weights to use, combining 4 solutions:<br>
(a) RTKLIB, (b) My WLS/Smoother soln, (c) WLS from RTKLIB's observation data, and (d) Kalman Smoothed WLS<br>
from RTKLIB.</p></li>
<li><p>Then, really out of time and ideas, I did a quick computation to see if there was any lat/lon offset in my solution vs. the ground truth. I found a very small offset and added that in as a correction, which resulted in about a .04 improvement.</p></li>
</ol>
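<p>Here is the sketch referenced in item 6: a NaN-aware weighted mean of several solutions (illustrative shapes and names, not the actual code).</p>
<pre><code>import numpy as np

def merge_solutions(positions, variances, solution_weights):
    """NaN-aware weighted mean of several trajectory solutions.

    positions        : (S, T, 2) lat/lng from S solutions over T epochs (NaN where missing)
    variances        : (S, T)    per-point variance estimates
    solution_weights : (S,)      per-solution weights W
    """
    w = solution_weights[:, None] / variances              # per-point weights W / variance
    missing = np.isnan(positions[..., 0]) | np.isnan(variances)
    w = np.where(missing, 0.0, w)                          # ignore missing points entirely
    num = np.nansum(w[..., None] * positions, axis=0)      # weighted coordinate sums
    den = np.sum(w, axis=0)[..., None]
    return num / np.where(den > 0, den, np.nan)            # NaN where no solution has a value
</code></pre>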
<p><strong>Other notes:</strong><br>
I never did any fitting to the Leaderboard. All my calculations and improvements were done on the train set. Every time (almost) I submitted a solution, it improved, roughly equaling the improvement on the train set. I never picked parameters or methods based on feedback from the Leaderboard, I wanted to be "clean" from an overfitting perspective.</p> |
Google Smartphone Decimeter Challenge 2022 | 35th place: rtklib tweaking and factor graph approach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I tried two different approaches for this competition:</p>
<ul>
<li>rtklib based</li>
<li>global optimization factor graph approach</li>
</ul>
<p><a href="https://github.com/krunt/gsdcchallenge2" target="_blank">source code</a>.</p>
<h1>1. Rtklib tweaking</h1>
<p>Based on the 'Getting started with RTKLIB' notebook.<br>
I tried tweaking different configuration parameters.<br>
It helped to keep the cycle slip flags from the receiver (slip_mask=3)<br>
and to add weighting by SNR and pseudorange uncertainty:<br>
stats-errsnr: 0 -> 0.003<br>
stats-errrcv: 0 -> 0.1<br>
stats-errphaseel: 0.003 -> 0<br>
stats-errphase: 0.003 -> 0</p>
<p>Finally rtklib solution was postsmoothed by <a href="https://www.kaggle.com/code/taroz1461/carrier-smoothing-robust-wls-kalman-smoother" target="_blank">https://www.kaggle.com/code/taroz1461/carrier-smoothing-robust-wls-kalman-smoother</a> notebook.</p>
<p>Tried ensembling between weighting by elevation and weighting by SNR -> worse results.<br>
Tried enabling ambiguity resolution - noisy data, false fixes -> no improvement.<br>
Tried estimating velocity (estvel() function) not from Doppler but with the TDCP technique (ADR),<br>
but after debugging I understood that the velocity from Doppler (the result of estvel())<br>
rarely enters the EKF state (only in case of a big position variance), and the velocity comes mostly from the Kalman state;<br>
I did not have much time/success in modifying the RTKLIB EKF model to adapt to this.</p>
<h1>2. Factor graph approach</h1>
<p>The hope was that this is a kind of global optimization model formulation (unlike the EKF)<br>
and that it could potentially help.</p>
<p>Adopted this code <a href="https://github.com/weisongwen/GraphGNSSLib" target="_blank">https://github.com/weisongwen/GraphGNSSLib</a>.</p>
<p>This approach uses preprocessing via RTKLIB and afterwards solves<br>
the optimization problem via the Ceres solver.</p>
<p>It did not really work out of the box for this dataset:<br>
it relied on ambiguity resolution (but there were no fixes at all).</p>
<p>Factors which finally were used:</p>
<ul>
<li>double-difference pseudorange factor with the reference station (analogous to RTKLIB's handling of code measurements, ddres() function)</li>
<li>motion factor (distance between neighbouring epochs divided by dt, which should equal the estimated velocity)</li>
<li>Doppler factor for velocity estimation (used only if a cycle slip was detected, otherwise the ADR factor was used);<br>
estimation from Doppler is close to the estvel() function in RTKLIB</li>
<li>ADR factor for velocity estimation (TDCP technique, see the paper "Time-differenced carrier phases technique for precise GNSS velocity estimation"); a plus of TDCP is the removal of the integer ambiguity term.</li>
</ul>
<p>After testing with the above factors I got results severely affected by outliers,<br>
so I tried to remove outliers via switchable constraints / large initial residuals, and made some progress.</p>
<p>Also, the resulting trajectory was non-smooth with all the above factors, so I added a kind of acceleration constraint<br>
penalizing large accelerations, and also post-processed the solution with a Kalman smoother based on the notebook listed above.</p>
<h1>Submission</h1>
<p>For the submission I used a weighted average of these two approaches.</p>
<p>I knew about GTSAM, but didn't get time to test it.<br>
The Ceres solver is very powerful, and I believed that<br>
the main goal is a good problem formulation and not optimizer tuning.</p> |
Google Smartphone Decimeter Challenge 2022 | 39th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I am very excited that this competition makes me a Kaggle Competitions Expert!<br>
I would like to thank the hosts for holding this super fun competition.<br>
Here is a summary of my solution:</p>
<h1>RTKLIB Baseline</h1>
<h3>Config</h3>
<p>I did not make many changes to the configuration, but I did lower the elmask as it seemed to improve cross validation.</p>
<h3>Base Stations</h3>
<p>I used the values from this <a href="https://www.kaggle.com/code/timeverett/getting-started-with-rtklib" target="_blank">notebook</a> - they should account for tectonic plate movement</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6537187%2Ff94dda00fdd6513006338031a8ac1965%2Fbase_stations_img.png?generation=1659210125676728&alt=media" alt=""></p>
<h1>Postprocessing</h1>
<h3>Outlier Correction</h3>
<p>I tried several outlier correction methods. I calculated the change in lat and lon between timesteps; if the car seemed to jump, the point was flagged as an outlier. I then interpolated over the outliers.</p>
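<p>A minimal sketch of this kind of jump detection and interpolation (hypothetical names and threshold; the author tuned theirs against the ground truth):</p>
<pre><code>import numpy as np
import pandas as pd

def remove_jumps(lat, lng, thresh_deg=1e-4):
    """Flag points whose step from both neighbours is abnormally large, then interpolate."""
    lat = pd.Series(lat, dtype=float)
    lng = pd.Series(lng, dtype=float)
    step_prev = np.hypot(lat.diff(), lng.diff())      # distance to previous point (degrees)
    step_next = np.hypot(lat.diff(-1), lng.diff(-1))  # distance to next point (degrees)
    outlier = (step_prev > thresh_deg) & (step_next > thresh_deg)
    lat[outlier] = np.nan
    lng[outlier] = np.nan
    return lat.interpolate().to_numpy(), lng.interpolate().to_numpy()
</code></pre>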
<h3>Position Shift</h3>
<p>A lot of my improvement came from position-shift style techniques. I created and used several variations of this algorithm. I got the idea from this <a href="https://www.kaggle.com/code/wrrosa/gsdc-position-shift" target="_blank">notebook</a> from last year's comp. The math below should give you a general idea of how it works. Alpha is a hyperparameter tuned using the ground truths.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6537187%2F73d75d7da0afb3cc1b88e7d0cad28875%2Fposition_shift_img.png?generation=1659203394492723&alt=media" alt=""></p>
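<p>The exact formula is in the image above; purely as a rough illustration of the general family of position-shift techniques (my assumption of one common parameterization, not necessarily the variant used here), shifting each point back along its local direction of motion by a tuned distance alpha could look like this:</p>
<pre><code>import numpy as np

def position_shift(points, alpha):
    """Shift each point back along its local direction of motion by alpha.

    points : (T, 2) positions in a locally metric (e.g. ENU) frame
    alpha  : shift distance, tuned against the ground truth
    """
    points = np.asarray(points, dtype=float)
    d = np.diff(points, axis=0)                     # motion vectors between epochs
    norm = np.linalg.norm(d, axis=1, keepdims=True)
    unit = np.divide(d, norm, out=np.zeros_like(d), where=norm > 0)
    shifted = points.copy()
    shifted[1:] -= alpha * unit                     # pull each point back toward the previous one
    return shifted
</code></pre>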
<h3>Stop Mean</h3>
<p>I tried a lot of different ideas to account for the car stopping; however, they did not give me much improvement. I spent a lot of time trying to get a LightGBM model to predict the velocity and then adjust for stopping. However, I could not get this to work well. I ended up just using a public algorithm I found, but it did not improve cross validation much.</p>
<h3>Other</h3>
<p>I tried a lot of techniques I saw from top solutions in last year's comp and tried many of my own ideas. I won't go into too much detail here, and I may use these if they decide to host this comp next year. I tried to correct errors in points when the car turned and stuff like that.</p>
<h1>Hyperparameter Tuning</h1>
<p>For the entire process, I compared my results to the ground truths. I excluded several paths from the ground truths, especially paths with errors <a href="https://www.kaggle.com/competitions/smartphone-decimeter-2022/discussion/337416" target="_blank">see discussion</a>.</p>
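<p>A minimal sketch of how such ground-truth-based tuning can be wired up with Optuna (illustrative only; the parameter names and the <code>apply_postprocessing</code> helper are hypothetical placeholders, not the actual pipeline):</p>
<pre><code>import optuna

def tune_postprocessing(predictions, ground_truth, apply_postprocessing, scoring_fn, n_trials=100):
    """Search postprocessing hyperparameters by scoring against the training ground truth."""
    def objective(trial):
        params = {
            "jump_thresh": trial.suggest_float("jump_thresh", 1e-5, 1e-3, log=True),
            "shift_alpha": trial.suggest_float("shift_alpha", 0.0, 1.0),
            "stop_speed": trial.suggest_float("stop_speed", 0.1, 2.0),
        }
        corrected = apply_postprocessing(predictions, **params)
        return scoring_fn(corrected, ground_truth)   # e.g. mean of 50th/95th percentile errors
    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=n_trials)
    return study.best_params
</code></pre>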
<p>I created a Pipeline class and all my postprocessing follows the same format. As a result, it made it easy to use Bayesian Optimization to hyperparameter tune everything</p> |
Google Smartphone Decimeter Challenge 2022 | 3rd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to hosts for organizing this competition.<br>
This competition required more patience than any other I have participated in, as there was a lot of missing data and a lot of domain knowledge required. I respect all the participants who ran this competition.</p>
<p>Here is my brief solution:</p>
<h2>Summary</h2>
<p>I think the key points to getting a gold medal in this competition are the following two:</p>
<ul>
<li>to estimate the velocity directly from ADR.</li>
<li>to correct the pseudorange bias using the reference station observations.</li>
</ul>
<p>These are mentioned by <a href="https://www.kaggle.com/taroz1461" target="_blank">@taroz1461</a> in <a href="https://www.kaggle.com/competitions/smartphone-decimeter-2022/discussion/335842#1850158" target="_blank">this discussion</a>.<br>
In addition to these two points, my solution improved the score by ensembling results calculated with various settings.</p>
<p>The pipeline of my solution is as follows:</p>
<ul>
<li>Step 1. Position estimation from GNSS data using RTKLIB.</li>
<li>Step 2. Velocity estimation from ADR in GNSS data using WLS.</li>
<li>Step 3. Position smoothing using Kalman smoothing and CostMinimization.</li>
</ul>
<p>My approach is based on published notebooks and is fairly basic.</p>
<h3>1. Position estimation from GNSS data using RTKLIB</h3>
<p>I used the published <a href="https://www.kaggle.com/code/timeverett/getting-started-with-rtklib" target="_blank">RTKLIB</a> and <a href="https://www.kaggle.com/code/timeverett/getting-started-with-rtklib-py" target="_blank">RTKLIB-py</a> to correct the pseudoranges using base station observations.<br>
(I wanted to implement the code myself, but I did not have the knowledge or enough time. Please let me know of good literature or code for reference.)</p>
<p>The output of RTKLIB is already Kalman-smoothed, but its velocity estimates seem less accurate than those obtained from ADR in Step 2. So in Step 3 I re-smoothed the RTKLIB output (the positions corrected using the base station) with the ADR velocity estimates.<br>
Also, I calculated positions under several conditions, such as different satellite elevation angle thresholds, and ensembled the results.<br>
Since RTKLIB and RTKLIB-py use different satellite types, the results of both were used for the ensemble.</p>
<h3>2. Velocity estimation from ADR data using WLS</h3>
<p>Based on this published code (<a href="https://www.kaggle.com/code/taroz1461/carrier-smoothing-robust-wls-kalman-smoother" target="_blank">carrier-smoothing-robust-wls-kalman-smoother</a>), I estimate the velocity directly from ADR by the weighted least squares method.<br>
As mentioned in the <a href="https://www.kaggle.com/competitions/google-smartphone-decimeter-challenge/discussion/262406#1455510" target="_blank">1st place solution</a> of the previous competition, the accuracy was higher than the result from Doppler shift. When ADR is missing, the velocity calculated from Doppler shift is used.</p>
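<p>The core of this velocity estimation can be written as a small weighted least-squares problem (a simplified sketch with assumed inputs, not the exact code): the time-differenced carrier phase divided by the epoch interval gives an observed range rate per satellite, which is modeled as the line-of-sight projection of the receiver velocity plus a common clock-drift term.</p>
<pre><code>import numpy as np

def wls_velocity(los, range_rates, sat_vel, weights):
    """Weighted least squares for receiver velocity from per-satellite range rates.
    los:         (N, 3) unit line-of-sight vectors from receiver to satellite
    range_rates: (N,)   observed range rates, e.g. delta(ADR) / delta(t)
    sat_vel:     (N, 3) satellite velocities in the same frame
    weights:     (N,)   e.g. 1 / AccumulatedDeltaRangeUncertaintyMeters**2
    Returns [vx, vy, vz, clock_drift]."""
    # Range-rate model: rr = los . (v_sat - v_rx) + clock_drift
    y = range_rates - np.sum(los * sat_vel, axis=1)
    G = np.hstack([-los, np.ones((los.shape[0], 1))])
    w = np.sqrt(weights)[:, None]
    x, *_ = np.linalg.lstsq(w * G, w[:, 0] * y, rcond=None)
    return x
</code></pre>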
<p>The ADR data has many missing points. In particular, since the gaps often occur during sudden acceleration and deceleration, they are difficult to interpolate and cause position errors around those points. So I adjusted the parameters to reduce the gaps as much as possible: for example, I set a lower elevation angle threshold for satellite selection, and I ignored the 'ADR State' when selecting satellites. These settings could worsen the score through multipath or cycle slips, but my local validation score and LB score improved.<br>
Although not confirmed, tuning the parameters for velocity estimation separately from those for position estimation may also have improved the score.</p>
<h3>3. Position smoothing using Kalman smoothing and CostMinimization</h3>
<p>Position smoothing is calculated from the position and velocity estimated above by two approaches.<br>
Kalman smoothing is based on this published code (<a href="https://www.kaggle.com/code/taroz1461/carrier-smoothing-robust-wls-kalman-smoother" target="_blank">carrier-smoothing-robust-wls-kalman-smoother</a>).<br>
CostMinimization is based on the <a href="https://www.kaggle.com/code/saitodevel01/dsdc-unified-post-processing/notebook" target="_blank">5th place solution in the previous competition</a>.</p>
<p>These two approaches did not differ much in score, but ensembling their results improved the score.<br>
Also, within each part, I ensembled results obtained with different parameters, such as the update conditions of the Kalman filter.</p>
<h3>Acknowledgements</h3>
<p><a href="https://www.kaggle.com/taroz1461" target="_blank">@taroz1461</a>, <a href="https://www.kaggle.com/saitodevel01" target="_blank">@saitodevel01</a> , <a href="https://www.kaggle.com/timeverett" target="_blank">@timeverett</a><br>
Thank you for sharing your great ideas and code.</p>
<p>Thank you for your comment.</p>
<p>In <a href="https://www.kaggle.com/code/taroz1461/carrier-smoothing-robust-wls-kalman-smoother" target="_blank">this notebook</a>, the score improved from 3.02 to 2.75 on the public LB by using the velocity from ADR. <br>
Since I did not tune any parameters in this case, the exact effect is not known.</p> |
Google Smartphone Decimeter Challenge 2022 | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The estimated time shift ranged from 430 to 470 ms for XiaomiMi8 and averaged 20 to 30 ms for other smartphones.<br>
The trend of Doppler time shift is consistent for smartphones other than XiaomiMi8, but the impact on the accuracy of position estimation will be negligible.</p>
<p>Thank you to hosts for hosting this great competition. <br>
At the start of this competition, I was concerned that it would not attract many Kagglers, because machine learning was not very effective for GNSS data in last year's competition, but I am surprised that the competition ended up at a very high level. <br>
I would like to publish my solution here:</p>
<h1>Summary</h1>
<p>I followed the <a href="https://www.kaggle.com/taroz1461" target="_blank">@taroz1461</a>'s <a href="https://www.kaggle.com/competitions/google-smartphone-decimeter-challenge/discussion/262406" target="_blank">1st place solution from last year competition</a> and combined it with <a href="https://www.kaggle.com/competitions/google-smartphone-decimeter-challenge/discussion/261959" target="_blank">my ideas of the 5th place solution</a>.</p>
<p>The key point of the solution is to formulate and solve the integration of GNSS observations (pseudorange, pseudorange rate (Doppler), ADR) and trajectory smoothing as a single optimization problem. This provides the following advantages: the accuracy of satellite positioning due to the satellite constellation is automatically taken into account in trajectory smoothing, and satellite positioning can be continued even if the number of satellites is temporarily less than required. In addition, by adding switchable constraints to all GNSS observations, satellite positioning becomes robust.</p>
<p>Optimization was done by writing an evaluation function and a custom solver for handling equality constraints using TensorFlow (not NN). The flexibility of the deep learning framework was helpful for quickly testing ideas, but most ideas did not contribute to accuracy improvement in this competition. In the end, optimization part was fairly simple.</p>
<p><img src="https://user-images.githubusercontent.com/309785/181914831-1e6e3068-4046-4468-9070-dabc8d5975ef.png" alt="summary"></p>
<h1>Input Data & Preprocessing</h1>
<ul>
<li><p>Raw GNSS log (supplemental/gnss_log.txt): <br>
device_gnss.csv had null lines for SignalType, satellite position, etc. So I used gnss_log.txt and added columns to the obtained data-frame.</p></li>
<li><p>Observation of pseudorange of base stations:<br>
I used <a href="https://www.unavco.org/data/gps-gnss/gps-gnss.html" target="_blank">UNAVCO's</a> 15-second rate observation data and calculated the residual between the base station pseudorange and distance between satellite and base station. Although it is not necessary to correct for ionospheric and tropospheric delays in relative positioning, I corrected for ionospheric and tropospheric delays for both base stations and smartphone, since I also used base station data with long baseline length when performing multiple base station ensembles.</p></li>
<li><p>Precise ephemeris:<br>
I used satellite orbit data from <a href="https://cddis.nasa.gov/Data_and_Derived_Products/GNSS/gnss_mgex_products.html" target="_blank">GNSS IGS MGEX Products</a> to obtain the satellite's position and velocity. Although the accuracy of the satellite position is not so critical in relative positioning, it was easier to interpolate the position data from the precise ephemeris than to calculate the satellite orbit from the navigation message.</p></li>
<li><p>Base station location data:<br>
I downloaded time series data of base station location data from <a href="http://geodesy.unr.edu/index.php" target="_blank">Nevada Geodetic Laboratory</a>, and used the average over the test data period (2021/4/28 - 2022/4/25) as the base station location.</p></li>
<li><p><a href="https://cddis.nasa.gov/Data_and_Derived_Products/GNSS/atmospheric_products.html#iono" target="_blank">Ionospheric delay data</a>:<br>
It was used to compensate for the difference of ionospheric delay between the base station location and the smartphone location. The contribution to the final accuracy of using external data on ionospheric delay should be small, since the ionospheric delay should be mostly cancelled by relative positioning.</p></li>
<li><p>WlsPosition in device_gnss.csv:<br>
I used the smoothed results of the baseline locations provided by the host to pre-compute the Sagnac effect, ionospheric delay, and tropospheric delay.</p></li>
<li><p>I did not use IMU sensor data at all in this competition because sensor data is noisy and irregular. I did a lot of work on signal processing of sensor data in the last year competition, but it did not contribute much to the improvement of my score.</p></li>
<li><p>Since I did not use machine learning, I used ground-truth data only to check accuracy of prediction results.</p></li>
</ul>
<h1>Optimization</h1>
<h2>State-space representation of smartphone trajectory</h2>
<p>The position, velocity, acceleration, and jerk (derivative of acceleration) of the smartphone are represented in the ECEF coordinate system, and discretized state equations are created assuming that the jerk is piecewise constant.</p>
<p><img src="https://user-images.githubusercontent.com/309785/181914830-93f67930-4d52-4b62-a6b1-de88f09867cd.png" alt="state_equation"></p>
<p>Under this assumption, the position of the smartphone is represented by a piecewise cubic polynomial, so the position of the smartphone at any time can be accurately calculated by third-order Hermite interpolation using the position and velocity of collocation points (same for velocity).</p>
<p>Using the position and velocity of the smartphone calculated by this method and the position and velocity of the satellites, the distance between the satellites and the smartphone and its derivative are calculated, and the trajectory of the smartphone is calculated by minimizing the residual between them and the GNSS observation data.</p>
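<p>For reference, the discretized transition under piecewise-constant jerk is standard kinematics; a small per-axis NumPy sketch (illustrative notation, not the exact implementation):</p>
<pre><code>import numpy as np

def constant_jerk_transition(dt):
    """State transition for x = [position, velocity, acceleration] on one axis,
    with the jerk treated as a piecewise-constant input over dt."""
    A = np.array([[1.0, dt, 0.5 * dt**2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    B = np.array([dt**3 / 6.0, dt**2 / 2.0, dt])   # effect of the jerk input
    return A, B

# x_next = A @ x + B * jerk, applied independently to each ECEF axis
</code></pre>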
<h2>Position smoothing via quadratic programming</h2>
<p>Smoothing is performed on the positions obtained by the weighted least squares method on pseudoranges, by solving a quadratic programming problem with the sum of the squares of the position residuals and jerks as the evaluation function.<br>
The coefficients of the position observation matrix are calculated by Hermite interpolation of the smartphone position at the satellite positioning times.</p>
<p>In last year's competition, I added velocity and acceleration terms to the evaluation function of a similar quadratic programming problem to improve accuracy, but in this competition, I did not use doppler and ADR observations at this point because the purpose of quadratic programming problems is to obtain initial values for the global optimization that follows.</p>
<h2>Global optimization</h2>
<p><img src="https://user-images.githubusercontent.com/309785/181914822-b29b3c62-4973-4320-b86c-c1ddf800c332.png" alt="computation"></p>
<p>The evaluation function is calculated by the computation graph shown above by TensorFlow.<br>
In this figure, the unknown variables (trainable variables) of the optimization are shown in red and pre-computable constants in blue. The inter-signal range biases are parameters shared across all times for each SignalType.</p>
<p>Due to the availability of base station observations, only GPS, GLONASS, and Galileo were used for pseudorange observation data. For the doppler and ADR observations, I used data from all constellations, including QZSS and BeiDou.<br>
In addition, weighting the residuals by ReceivedSvTimeUncertaintyNanos, PseudorangeRateUncertaintyMetersPerSecond, and AccumulatedDeltaRangeUncertaintyMeters contributed to accuracy improvement.</p>
<p>To take into account the equality constraints of the state equation, augmented Lagrangian method is used to optimize the evaluation function defined by the above computation graph.</p>
<p><img src="https://user-images.githubusercontent.com/309785/181914819-7604ffec-7d19-4940-a3f0-bcd261917516.png" alt="augmented_Lagrangian"></p>
<p>Where f(x) is the evaluation function and h(x) is the equality constraint corresponding to the above state equations.<br>
More specifically, the following steps are repeated alternately: <br>
(1) minimizing the augmented Lagrangian by Adam optimizer.<br>
(2) updating the dual variables of the equality constraints (λ).</p>
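<p>A stripped-down sketch of this alternating scheme (illustrative only, using TensorFlow's low-level API; <code>f</code> and <code>h</code> stand for the evaluation function and the state-equation constraints and are placeholders):</p>
<pre><code>import tensorflow as tf

def augmented_lagrangian_solve(x, f, h, mu=10.0, outer_steps=20, inner_steps=500, lr=1e-2):
    """Minimize f(x) subject to h(x) = 0 by alternating Adam steps on the
    augmented Lagrangian with dual-variable (lambda) updates.
    x is a tf.Variable holding the stacked trajectory unknowns
    (in practice the variables are float64, see the note on precision below)."""
    lam = tf.Variable(tf.zeros_like(h(x)), trainable=False)
    opt = tf.keras.optimizers.Adam(learning_rate=lr)
    for _ in range(outer_steps):
        for _ in range(inner_steps):                 # (1) primal minimization
            with tf.GradientTape() as tape:
                hx = h(x)
                loss = f(x) + tf.reduce_sum(lam * hx) + 0.5 * mu * tf.reduce_sum(hx ** 2)
            grads = tape.gradient(loss, [x])
            opt.apply_gradients(zip(grads, [x]))
        lam.assign_add(mu * h(x))                    # (2) dual update
    return x
</code></pre>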
<p>Convergence of this optimization is slow! But, it can robustly compute solutions with good accuracy.</p>
<p>Note that double precision floating point is used in all computations to eliminate floating point rounding errors. For this reason, optimization is performed using the CPU instead of the GPU.</p>
<h1>Workaround for pseudorange-rate of XiaomiMi8</h1>
<p><img src="https://user-images.githubusercontent.com/309785/181914836-97fa4def-3d2f-4d55-9534-33fd43fc8463.png" alt="velocity_XiaomiMi8"></p>
<p>This figure shows comparison between ground-truth velocity and estimated velocity by weighted least-squares method using pseudorange-rate in 2021-12-07-US-LAX-1/XiaomiMi8.<br>
This shows that there is a time discrepancy of about 0.5 seconds between the estimated velocity and the ground-truth (WHY?). Therefore, correcting for this time discrepancy improves the accuracy of position estimation.<br>
In my solution, this problem is addressed by adding a correction term (acceleration × time delta) to the smartphone velocity and including the time delta as an optimization variable. (This workaround is also applied to data from phones other than XiaomiMi8.) This was quite effective in improving the public LB, because there are many XiaomiMi8 traces in the public test data.</p>
<p><img src="https://user-images.githubusercontent.com/309785/181914823-1272523e-2a04-4d4a-b6d8-e0bf64797501.png" alt="modify_velocity"></p>
<h1>Ensemble of multiple base station solutions</h1>
<p><img src="https://user-images.githubusercontent.com/309785/181917131-42d34aa8-8e9a-4fa3-b9ca-c06204c6fe58.jpg" alt="google-earth"></p>
<p>I had originally planned to interpolate solutions from multiple base stations based on the locations of the base stations and the smartphone, but I found that the correlation between baseline length and position error was not so clear, and simply averaging the solutions from multiple base stations was slightly more accurate. The reason for this is assumed to be that the pseudorange data of the base stations also contains noise of about 0.5 to 1.0 m. Therefore, the median of the solutions of multiple base stations was used as the final submission, after correcting the difference between the ionospheric and tropospheric delays at the base station and smartphone locations using external data and the model.</p>
<ol>
<li>I used Tensorflow's low-level API (e.g., tf.Variable) to create the computation graph, not Keras API. The unknown variables of position, velocity, acceleration, and jerk are equally spaced data in 1 second increments, but since the position and velocity corresponding to each satellite observation epoch are interpolated and then replicated by tf.gather, it does not matter if the number of satellites is different for each epoch.</li>
<li>I think the answer to that question would be full-batch, since the gradient is calculated using all the data for each iteration.</li>
<li>It takes more than 24 hours to run inference on the 206 train and test traces without ensembles, running 2 in parallel on a 16-core CPU, which is about 15 minutes per trace. Because it is better to spend more time than to truncate iterations and reduce accuracy in a CSV competition, I did not even implement a convergence criterion. The score improved when I just increased the number of iterations in the last submission, so I could have made more improvements to the optimization algorithm.</li>
</ol>
<p>Thank you for your question.<br>
1) Since I knew from the previous competition that ADR (carrier phase) is the key to high accuracy, so I did not properly examine the patterns excluding ADR. Observation data with resets and cycle slips in AccumulatedDeltaRangeState or with timestamp differences of 1.5 sec or more are ignored by multiplying the residuals by zero.<br>
The square of AccumulatedDeltaRangeUncertaintyMeters plus the default value (hyperparameter) is used as the variance of the ADR observation. Otherwise, the method is the same as in <a href="https://nikosuenderhauf.github.io/assets/papers/IROS12-switchableConstraints.pdf" target="_blank">the original paper</a>.<br>
2) The average improvement of the ensemble was about 0.05m, but not all data showed an improvement.</p> |
Google Smartphone Decimeter Challenge 2022 | 6th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the hosts and competitors for a great competition! I have enjoyed participating as well as helping introduce others to RTKLIB. I've also learned a lot, not just from working with the smartphone data, but also from reading all of the interesting discussion threads and notebooks.</p>
<p>I had already shared much of my solution in the <a href="https://www.kaggle.com/code/timeverett/getting-started-with-rtklib" target="_blank">"Getting Started With RTKLIB" </a>notebook but I have just updated it to<br>
include my best RTKLIB solution for the test data set. This includes one of two config files I used in my final submission as well as my most recent version of the RTKLIB library code and android_rinex library code. This will provide a score of 1.597 on the private leaderboard and 1.753 on the public leaderboard. Note that this doesn't include any post-processing of the RTKLIB results other than replacing the data that contains hardware clock discontinuities with other publicly available solutions, so it should not be difficult to improve on these results. Also note that the private score for this solution is lower than my final submission, since for my submission, I merged these results with a solution from a second config file. The other solution improved scores for the training data and the public test data but unfortunately it made things worse for the private test data.</p>
<p>I have added the resulting submission file to the notebook input data for anyone who would like to use it as an input to their post-processing algorithms and don't want to have to get RTKLIB working on their system. I would be curious how this fares when combined with some more advanced post-processing since I focused almost entirely on the GNSS solution itself and only did minimal post-processing in my submissions. If you do try this, please share your results in the comments below.</p>
<p>There was approximately one meter of improvement between this solution and the previous version of the notebook. This was the culmination of many small improvements in code, configuration, and input files, and not from any single major breakthrough. Changes included:</p>
<p><strong>Satellite Navigation Files</strong></p>
<ul>
<li>Replaced the navigation files from UNAVCO with files from CDDIS since the UNAVCO files seemed to be missing some data</li>
</ul>
<p><strong>Android_Rinex code - conversion from raw data to rinex</strong></p>
<ul>
<li>Minor cleanup for handling the hardware clock discontinuities</li>
</ul>
<p><strong>RTKLIB Code - PPK solutions</strong></p>
<ul>
<li>Changed the pseudorange outlier threshold into a config parameter, previously it was a function of the phase outlier threshold</li>
<li>For measurement differencing, replaced the closest previous base observation with the closest base observation</li>
<li>Adjusted the phase measurement variances by frequency as well as by constellation</li>
<li>Tightened up the outlier threshold adjustments for the initial observation after acquire</li>
</ul>
<p><strong>RTKLIB Configuration parameters</strong></p>
<ul>
<li>Switched the raw measurement weighting from elevation based to SNR based</li>
<li>Increased the weight of L5 measurements, and reduced the weight of L1 measurements</li>
<li>Increased the weight of the pseudorange measurements, and reduced the weight of the phase measurements</li>
<li>Opened up the measurement filter thresholds (min elevation, cycle slip detect, outlier thresholds) to include more measurements in the result</li>
<li>Reduced the acceleration process noise in the kalman filter ( a little reduction is a good thing, too much is what hurt my final submission on the private test data)</li>
<li>Set the measurement weighting adjustment for satellite clock stability to zero</li>
<li>Disabled the interpolation of base observations</li>
</ul>
<p>For more details, see the code changes in the Github repositories and the config file changes in the notebook. If you have any questions about these changes or about RTKLIB in general, please ask in the comments below, I'm always happy to share what I know about RTKLIB. You can also visit <a href="https://rtklibexplorer.wordpress.com/" target="_blank">my blog</a> for more information on using RTKLIB.</p>
<p>1) RTKLIB does estimate velocity using doppler measurements but does not use the results for PPK solutions.<br>
2) I believe that UME's 3rd place finish comes from effectively adding velocity estimation to the RTKLIB solution (in post-processing), so, yes, I do think it can help. I suspect the benefit comes more from including the doppler measurements than it does from the velocity estimation directly, but I think that independent velocity estimation is an effective method to integrate the doppler measurements into the solution.<br>
3) No, ideally RTKLIB solutions should accurately estimate the acceleration of the receiver making the measurements. I suspect you are referring to the relatively low acceleration process noise configuration parameters I used in my solution. Typically, these parameters are chosen to match the acceleration characteristics of the receiver but reducing these will effectively increase the weight of the kalman filter's estimate of receiver location and makes it a little easier to reject outlier measurements. The cost is that the solution can lag behind the actual movement during periods of high acceleration. I tried to find a balance between these two factors, given the very large number of outliers in these data sets.</p> |
Google Smartphone Decimeter Challenge 2022 | [11th Place] Solution: RTK, CV, and two Post-Processing are all you need | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to the organizers and Kaggle for organizing this unique challenge. We would also thank the other participants. It's the competition that makes things more interesting. Thanks to my teammates <a href="https://www.kaggle.com/linwei9" target="_blank">@linwei9</a>, <a href="https://www.kaggle.com/chris62" target="_blank">@chris62</a>, <a href="https://www.kaggle.com/gmhost" target="_blank">@gmhost</a>, and <a href="https://www.kaggle.com/rytisva88" target="_blank">@rytisva88</a>. It's team work that won us a gold medal.</p>
<p>Our solution is an ensemble of <a href="https://www.kaggle.com/code/timeverett/getting-started-with-rtklib" target="_blank">RTK</a> and <a href="https://www.kaggle.com/code/taroz1461/carrier-smoothing-robust-wls-kalman-smoother" target="_blank">taro's public baseline</a>. Thank you <a href="https://www.kaggle.com/timeverett" target="_blank">@timeverett</a> <a href="https://www.kaggle.com/taroz1461" target="_blank">@taroz1461</a> for your great contribution. These two codes help newbies like me to get started with GNSS positioning quickly. But since we didn't end up making too much improvement to the taro's public baseline, RTK played the most part in our final solution.</p>
<h2>Modification to Taro's public baseline</h2>
<ol>
<li><p>Switch the loss function of the WLS for position estimation and velocity estimation from <code>soft_L1</code> loss to <code>Cauchy</code> loss (see the sketch after this list). </p></li>
<li><p>Optimize the parameters of the Kalman filter phone-wise with Optuna. It helped the public LB, but did not really help the private LB.</p></li>
<li><p>Predict whether the car is stopping with a threshold, and average all the stopping points.</p></li>
<li><p>Use LGB to predict the relative position. It can help the baseline when the loss function is <code>soft_L1</code> loss, but not when it is <code>Cauchy</code> loss.</p></li>
</ol>
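<p>For reference, the loss switch in point 1 is a one-argument change if the WLS is implemented with <code>scipy.optimize.least_squares</code>, as in similar public baselines (a minimal self-contained sketch with dummy data):</p>
<pre><code>import numpy as np
from scipy.optimize import least_squares

def resid_fn(x, ranges, sat_pos):
    """Pseudorange residuals for a state x = [rx, ry, rz, clock_bias]."""
    return np.linalg.norm(sat_pos - x[:3], axis=1) + x[3] - ranges

sat_pos = np.random.randn(8, 3) * 2.0e7                       # dummy satellite positions (m)
truth = np.array([1.0e6, 2.0e6, 3.0e6, 100.0])
ranges = np.linalg.norm(sat_pos - truth[:3], axis=1) + truth[3]
x0 = np.zeros(4)

sol_soft = least_squares(resid_fn, x0, loss="soft_l1", f_scale=1.0, args=(ranges, sat_pos))
sol_cauchy = least_squares(resid_fn, x0, loss="cauchy", f_scale=1.0, args=(ranges, sat_pos))
</code></pre>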
<h2>Modification to RTK's baseline</h2>
<h3>Tuning the parameters</h3>
<ol>
<li><p>Go to <a href="https://rtklibexplorer.wordpress.com/resources/" target="_blank">RTKExplorer's blog</a>, download the Demo5 User Manual Introduction.</p></li>
<li><p>Read the demo5 manual to understand the meaning and default values of each parameter specified in the open source RTK notebook from the manual.</p></li>
<li><p>Fine-tune each parameter around the default values in the manual and the parameter values in the open source code. The RTK scores for the whole training set (excluding wrong paths) are calculated after each adjustment. We found that CV and LB are inconsistent when tuning the parameters only on certain phones. But if run on the whole dataset, CV shows the same improving/worsening trend as LB, even though the absolute values are not consistent.</p></li>
<li><p>Ensemble the predictions of the best parameters on the training data and of several sub-optimal parameter sets.</p></li>
</ol>
<h3>Adding Reference Station</h3>
<p>Using multiple reference stations for the rtk solution, and averaging the results</p>
<h2>Ensemble</h2>
<h3>Calculate the score of each kind of phone!</h3>
<ol>
<li><p>You can get the score of each kind of phone on training data with <a href="https://www.kaggle.com/forcewithme/gsdc2022-get-the-score-on-training-set" target="_blank">this notebook</a>. You can also search for the ensemble weight of two baselines at the end. This step is really important, as you can find that <strong>rtk is so good at dealing with GooglePixel5, Samsung, and Xiaomi</strong>, which are all the kinds of phones in PB. Especially for Google5 and Samsung, which are the most part in PB, RTK can reach a score of another level compared to the other baseline. You can also find that it is so bad on GooglePixel4, and GooglePixel4XL. </p></li>
<li><p>With a weight of 0.65:0.35 for RTK and taro's public baseline(modified by us), we can reach our highest LB rank, which also ranks 11 on LB.</p></li>
<li><p>But given that GooglePixel5 and Samsung make up most of the private test set, we expected a big shake-up on the private LB. Since RTK performs really well on the training set for GooglePixel5 and Samsung, we decided to give it a much higher weight for the private LB. In the end, we used a weight of 0.8:0.2 to get 11th place on the private LB.</p></li>
</ol>
<h3>Post Processing</h3>
<ol>
<li>Replace the outliers of RTK. </li>
</ol>
<p>There are some obvious outliers in the public RTK baseline(python). We simply replace them with the points of taro's baseline. <br>
<a href="https://imgloc.com/i/Ft1PB" target="_blank"><img src="https://s1.328888.xyz/2022/08/01/Ft1PB.png" alt="Ft1PB.png"></a><br>
<a href="https://imgloc.com/i/FtKuP" target="_blank"><img src="https://s1.328888.xyz/2022/08/01/FtKuP.md.png" alt="FtKuP.md.png"></a></p>
<p>Even if an RTK submission scores 2.4 (red) on the LB, there are still some strange points. We also replace these with taro's public baseline (blue).<br>
<a href="https://imgloc.com/i/FtvBd" target="_blank"><img src="https://s1.328888.xyz/2022/08/01/FtvBd.md.png" alt="FtvBd.md.png"></a></p>
<ol>
<li>Average the stopping points.</li>
</ol>
<p>We use the code in <a href="https://www.kaggle.com/code/robikscube/smartphone-competition-2022-twitch-stream" target="_blank">rob's notebook</a> to predict whether it's stopping, and average all of the stopping points.<br>
We noticed that even after ensemble, there are some noisy stopping points. So we used that code to average them.<br>
<a href="https://imgloc.com/i/FyVHF" target="_blank"><img src="https://s1.328888.xyz/2022/08/01/FyVHF.md.png" alt="FyVHF.md.png"></a><br>
<a href="https://imgloc.com/i/FyIIo" target="_blank"><img src="https://s1.328888.xyz/2022/08/01/FyIIo.md.png" alt="FyIIo.md.png"></a></p>
<p>But we don't think treating them as the same point is a good idea. When we are driving on our own and need to brake, we cannot suddenly slow down from 100 mph to 0 mph. Usually we gradually decelerate to 0 over a period of a few seconds. So in the end, I average the stop mean and the original predictions. We call it 'rolling stop mean'.</p>
<p><a href="https://imgloc.com/i/Fygvy" target="_blank"><img src="https://s1.328888.xyz/2022/08/01/Fygvy.md.png" alt="Fygvy.md.png"></a><br>
<a href="https://imgloc.com/i/FyUwC" target="_blank"><img src="https://s1.328888.xyz/2022/08/01/FyUwC.md.png" alt="FyUwC.md.png"></a></p> |
Google Smartphone Decimeter Challenge 2022 | [43rd place] RTKLIB & Lots of Postprocessing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the hosts and to everyone who took part and contributed to this very interesting and challenging competition. I am very much looking forward to reading through the write-ups from the gold medal teams. As you will gather from what follows, I concentrated firstly on getting RTKLIB to work for me, and secondly on adding layers of postprocessing.</p>
<p><strong>Step 0. My starting point was a slightly adjusted version of <a href="https://www.kaggle.com/taroz1461" target="_blank">@taroz1461</a>’s notebook</strong></p>
<p><a href="https://www.kaggle.com/code/taroz1461/carrier-smoothing-robust-wls-kalman-smoother" target="_blank">Carrier Smoothing + Robust WLS + Kalman Smoother</a></p>
<p>Public score 2.964, Private score 3.462</p>
<p>Note that I clearly had overfitted this to the public LB, as my version was worse than the original on the private set.</p>
<p>Subsequent steps are listed in logical rather than temporal order.</p>
<p><strong>Step 1. RTKLIB.</strong></p>
<p><a href="https://www.kaggle.com/timeverett" target="_blank">@timeverett</a> provided a notebook</p>
<p><a href="https://www.kaggle.com/code/timeverett/getting-started-with-rtklib" target="_blank">Getting Started with RTKLIB</a></p>
<p>giving detailed instructions on the installation of RKTLIB. From the quoted LB scores, this offered an improvement of close to 0.3 m on the public LB and more than 1.0 m on the private set, more than the cumulative effect of the other steps of my workflow. Nonetheless, it was far from straightforward to get the software working correctly in my Windows environment. Most of my issues arose from the code not liking Windows paths. My initial problem turned out to be that I'd tried to run the code from a Windows folder with a space in its name, the space arising from the formatting of my user name, which threw an error. I therefore created a second instance, this time in a folder with no space in the name. The python code needed to be told explicitly about Windows paths. Thus, in step 4 of the process defined in <a href="https://www.kaggle.com/timeverett" target="_blank">@timeverett</a>'s notebook (so step 4 of step 1 in my nomenclature - sorry!), I changed the python code to explicitly add the required paths in the correct OS format. After three separate days of struggle, I finally got the code to run correctly.</p>
<p>Public score 2.682, Private score 2.189</p>
<p><strong>Step 2. Generic Bias Correction</strong></p>
<p>The excellent notebooks</p>
<p><a href="https://www.kaggle.com/code/saitodevel01/gsdc-bias-eda" target="_blank">GSDC - Bias EDA</a></p>
<p>and</p>
<p><a href="https://www.kaggle.com/code/saitodevel01/gsdc-bias-correction" target="_blank">GSDC - Bias Correction</a></p>
<p>by <a href="https://www.kaggle.com/saitodevel01" target="_blank">@saitodevel01</a>, written for the previous GSDC competition in 2021, demonstrate how the positions of the phone and driver in the car create biases between the recorded and actual positions in both the front-back and left-right directions.<br>
In outline, the phone has better sightlines to satellites located in front of the car, since it is placed close to the front windscreen, whereas sightlines of signals from satellites located behind the car may be compromised. Similarly, the driver sits to the left of the phone (these data came from the USA) and may block signals from satellites located to the left, whereas those to the right should have better communication. This would be expected to lead to an artefactual error locating the phone too far forward and too far to the right; hence the anticipated correction would move the phone backwards and leftwards.</p>
<p>I analysed <a href="https://www.kaggle.com/saitodevel01" target="_blank">@saitodevel01</a>'s work from the 2021 competition, here is an example plot showing <a href="https://www.kaggle.com/saitodevel01" target="_blank">@saitodevel01</a>'s latitude correction against the cosine of the angle between the car's current direction and due north:</p>
<p><img src="https://i.imgur.com/HjF7Nmh.png"></p>
<p>The corresponding plot for longitude looks broadly similar in form. My simplification is to treat the ellipse as the signal I want to model, and ignore all other points. That leads to a constant "lever arm" type correction of these biases, which I express as simple trigonometric functions of the direction the car is pointing in any one time step. Thus, I can compute latitude and longitude corrections for each predicted position.</p>
<p>My analysis of the prior work suggests that the predicted uniform generic correction is:</p>
<p>40 cm from front to back; 27 cm from right to left.</p>
<p>I added these corrections to the originally predicted latitude and longitude to generate a trial postprocessed submission.<br>
However, some trial and error showed that the best public LB scores came from a somewhat larger front-back correction of about 1.3 times that previously suggested. This works out as about 52 cm front to back correction.</p>
<p>For the left-right correction, surprisingly the best empirical public LB results came from a small correction of about 8 cm from left to right, that is in the counterintuitive direction. However, for simplicity, I set this generic left-right correction to zero.</p>
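<p>A minimal sketch of applying such a heading-dependent lever-arm correction (illustrative only; the column names and the spherical-earth approximation are simplifications, and the defaults mirror the generic corrections above):</p>
<pre><code>import numpy as np
import pandas as pd

R_EARTH = 6_371_000.0  # metres

def apply_lever_arm(df, backward_m=0.52, left_m=0.0):
    """Shift each predicted point backwards and to the left of the car's heading."""
    lat = np.radians(df["LatitudeDegrees"].to_numpy())
    lon = np.radians(df["LongitudeDegrees"].to_numpy())
    # Local north/east displacement between consecutive epochs gives the heading.
    d_north = np.gradient(lat) * R_EARTH
    d_east = np.gradient(lon) * R_EARTH * np.cos(lat)
    heading = np.arctan2(d_east, d_north)            # 0 = north, pi/2 = east
    # Backward is the opposite of the heading; left is 90 degrees anticlockwise of it.
    shift_east = -backward_m * np.sin(heading) - left_m * np.cos(heading)
    shift_north = -backward_m * np.cos(heading) + left_m * np.sin(heading)
    out = df.copy()
    out["LatitudeDegrees"] += np.degrees(shift_north / R_EARTH)
    out["LongitudeDegrees"] += np.degrees(shift_east / (R_EARTH * np.cos(lat)))
    return out
</code></pre>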
<p>Public score 2.595, Private score 2.158</p>
<p><strong>Step 3: Phone-Specific Bias Correction</strong></p>
<p>Some consideration of possible relevant effects led me to consider whether the bias correction might depend on the model of phone. There are various possible explanations for this. Possibly some aspect of the phone's hardware or software, or alternatively some quirk of the individual devices used in the experiment, meant that some phones were better able than others to communicate with satellites via compromised sightlines or through atmospheric or environmental interference. Hence, corrections could vary between phones. Alternatively, if each phone has a designated holder or position in the car, its results may be affected by sightlines and/or lever arm correction errors related to that position. For example. one phone holder may have better views backwards or to the left than another, depending on the extent to which the driver, seats, or experimental equipment block the view. This seemed worth investigating, so a brief analysis was made using training set data, giving these results:</p>
<table>
<thead>
<tr>
<th>Phone</th>
<th>Forward error</th>
<th>Forward error relative to dataset</th>
<th>Left error</th>
<th>Left error relative to dataset</th>
</tr>
</thead>
<tbody>
<tr>
<td>GooglePixel4</td>
<td>-0.1798</td>
<td>-0.1639</td>
<td>-0.3383</td>
<td>-0.0136</td>
</tr>
<tr>
<td>GooglePixel4XL</td>
<td>0.1375</td>
<td>0.1535</td>
<td>-0.2301</td>
<td>0.0947</td>
</tr>
<tr>
<td>GooglePixel5</td>
<td>-0.7068</td>
<td>-0.6909</td>
<td>0.2203</td>
<td>0.5450</td>
</tr>
<tr>
<td>SamsungGalaxyS20Ultra</td>
<td>0.1469</td>
<td>0.1628</td>
<td>-1.0847</td>
<td>-0.7599</td>
</tr>
<tr>
<td>XiaomiMi8</td>
<td>0.4559</td>
<td>0.4718</td>
<td>-1.0740</td>
<td>-0.7493</td>
</tr>
</tbody>
</table>
<p>It seemed that it was worth trying a phone-specific set of corrections. Firstly, I looked at the front-back corrections. I only accept these corrections if they are in the same direction as the training results, and (for the front-back correction) no larger than the original generic correction. We can do this for the four phone models that are common to the training and test data; GooglePixel4, GooglePixel5, XiaomiMi8 and SamsungGalaxyS20Ultra. There is some trial and error involved here; the two (direction and size) constraints mentioned above are designed to limit public-score chasing. I also implemented phone-specific left-right corrections.</p>
<p>This is implemented in the notebook</p>
<p><a href="https://www.kaggle.com/code/jbomitchell/phone-specific-bias-corrections-clipped" target="_blank">Phone-specific Bias Corrections (Clipped)</a></p>
<p>Public score 2.508, Private score 2.094</p>
<p><strong>Step 4: Stop averaging.</strong></p>
<p>Any car journey is likely to involve periods when the vehicle is not going anywhere, whether parked up at the beginning or end, stopped at a light, or waiting for a junction or road to become clear. When we look at the data, we see substantial sequences of points where the position only changes slightly, these drifts of a few cm generally being artefacts. Despite the car being stationary, the GNSS position still appears to wander around a little, which leads to the idea of averaging coordinates over sets of points where the car was stopped. A threshold of 1 m/s seemed to work fairly well for many of my predictions, with most submission scores being improved. Some, but not all, submissions benefitted from pasting the output of the stop averaging process directly back into another iteration of the same computation. However, the RTKLIB solutions could not be improved by stop averaging. </p>
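<p>A minimal pandas/NumPy sketch of this kind of stop averaging (illustrative only; column names are assumed):</p>
<pre><code>import numpy as np
import pandas as pd

R_EARTH = 6_371_000.0  # metres

def stop_average(df, speed_thresh=1.0):
    """Average lat/lon over contiguous runs of epochs where the car appears stopped."""
    lat = np.radians(df["LatitudeDegrees"].to_numpy())
    lon = np.radians(df["LongitudeDegrees"].to_numpy())
    t = df["UnixTimeMillis"].to_numpy() / 1000.0
    # Rough ground speed between consecutive epochs (local flat-earth approximation).
    d_north = np.diff(lat) * R_EARTH
    d_east = np.diff(lon) * R_EARTH * np.cos(lat[:-1])
    speed = np.hypot(d_north, d_east) / np.diff(t)
    stopped = np.r_[False, speed < speed_thresh]
    # Contiguous stopped epochs share a group id; replace each group by its mean position.
    group = np.cumsum(~stopped)
    out = df.copy()
    for g in np.unique(group[stopped]):
        mask = stopped & (group == g)
        for col in ["LatitudeDegrees", "LongitudeDegrees"]:
            out.loc[out.index[mask], col] = df[col].to_numpy()[mask].mean()
    return out
</code></pre>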
<p>Public and private scores were not improved for my best submissions.</p>
<p><strong>Step 5: Tectonic correction.</strong></p>
<p>I implemented corrections for the change in location of the base stations due to tectonic movement. The relevant base stations were: SLAC for Bay Area trips, with ITRF2014 velocity:<br>
northward = 0.0164 m/yr, eastward = -0.0343 m/yr, upward = 0.0009 m/yr<br>
Station VDCY for Los Angeles trips, with ITRF 2014 velocity:<br>
northward = 0.0131 m/yr, eastward = -0.0374 m/yr, upward = 0.0009 m/yr<br>
The RTKLIB calculations underlying my submission used a single set of base station positions computed to be correct midway through the time period over which the data were being collected, VDCY being selected for Los Angeles trips and SLAC for Bay Area trips. The distance through which I need to shift the prediction is proportional to the (signed) number of days before or after the reference date that the run takes place (N), with the annual position shift being multiplied by (N/365.25) and assigned the appropriate sign depending on the direction of the time shift.<br>
I didn’t end up choosing a submission including this correction, based on the public score, but in fact it would have given a small improvement on the private set. The notebook is:</p>
<p><a href="https://www.kaggle.com/code/jbomitchell/tectonic-correction" target="_blank">Tectonic Correction</a></p>
<p>Public score 2.508, Private score 2.093</p>
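<p>This correction is simple enough to express in a couple of lines (an illustrative sketch using the SLAC velocity quoted above; the sign follows the signed day offset N):</p>
<pre><code>def tectonic_shift(days_from_reference, v_north_m_per_yr, v_east_m_per_yr):
    """Station drift accumulated over the (signed) offset from the reference date."""
    years = days_from_reference / 365.25
    return v_north_m_per_yr * years, v_east_m_per_yr * years

# e.g. SLAC (Bay Area): a run 200 days after the reference date has drifted by about
dn, de = tectonic_shift(200, 0.0164, -0.0343)   # (+0.009 m north, -0.019 m east)
</code></pre>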
<p><strong>Step 6. Outlier correction.</strong></p>
<p>In principle, it sounds straightforward to make a simple projection or projections of where a point should be predicted to lie, and compare with the actual prediction. There’s a good notebook<br>
<a href="https://www.kaggle.com/code/ravishah1/gsdc2-savgol-filter-outlier-removal-with-bo" target="_blank">GSDC2 - Savgol Filter + Outlier Removal with BO</a><br>
that implements an algorithm of this kind. However, I found no improvements from applying this kind of approach to what I’d consider “reasonably good” submissions. Even when I focused on fixing just the most egregious 100 or so outliers, I still made my score worse. I suspect it is hard to do what the competition metric requires, which is essentially to fix a “worst 5%” prediction well enough to move it out of the highest-error 5%. I had some further ideas as to how to get this to work, but with the likely payoff rather small, I did not pursue them.</p>
<p>Public score was not improved. One of my outlier correction models would marginally have improved the private score.</p> |
Google Universal Image Embedding | 10th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Universal Image Embedding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the winners, and thanks to the organizers and all participants. This is a joint writeup with my wonderful teammates <a href="https://www.kaggle.com/yamash73" target="_blank">@yamash73</a> and <a href="https://www.kaggle.com/ksork6s4" target="_blank">@ksork6s4</a> (thanks for the awesome teamwork).</p>
<p><strong>Things that worked</strong></p>
<ul>
<li>Selection of dataset</li>
<li>BatchNorm1d in head</li>
<li>Ensemble of OpenCLIP backbones</li>
</ul>
<p><strong>Dataset</strong><br>
We noticed that public LB scores vary greatly depending on the combination of datasets, so we explored the best combination of datasets in the early stages of the competition. Comparison of the experimental results showed that "Inshop", "Instre" and "Consumer-to-shop" are effective. Based on these results, we decided to use "Products10k", "Landmark", "Instre" and "Consumer-to-shop" as our basic dataset combination. Here "Inshop" and "Consumer-to-shop" are subsets of DeepFashion, so only "Consumer-to-shop" was selected for our combination of datasets.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4241674%2Fe153943075ee571c7b42182c0a3a41eb%2F2022-10-11%20214238.png?generation=1665492226981590&alt=media" alt=""></p>
<p>NOTE:</p>
<ul>
<li>The results of separate experiments confirm that Products10k and Landmark contribute to the LB scores</li>
<li>For Inshop, Stanford products, and Consumer-to-shop, only classes with a sample size of more than 5 are used</li>
</ul>
<p>Our final combination of datasets is as follows:</p>
<ul>
<li>Products10k </li>
<li>Landmark</li>
<li>Instre</li>
<li>Consumer-to-shop</li>
<li>Food101 (added in the last week)</li>
</ul>
<p><strong>Model</strong><br>
The backbone of our final model is an ensemble of 3 OpenCLIP image encoders. The OpenCLIP backbones contributed a huge improvement to the LB score. We referred to the head/loss design from past competitions (mainly Google Landmark Retrieval 2021).</p>
<ul>
<li>Backbone: OpenCLIP ViT-B + OpenCLIP ViT-L + OpenCLIP ViT-H</li>
<li>Head: Linear + BatchNorm1d + PReLU + subcenter ArcFace(k=3)</li>
<li>Loss: ArcFace Loss with dynamic margin (s=30.0, mx=0.65, my=0.15) + focal loss</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4241674%2Fe9e01248f4d6655c00d75348c54aeda5%2F1.png?generation=1665490798502330&alt=media" alt=""></p>
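<p>A PyTorch-style sketch of a head of this shape (a simplified illustration with assumed dimensions; the per-class dynamic margins are shown schematically and this is not the exact training code):</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class SubCenterArcFaceHead(nn.Module):
    """Linear + BatchNorm1d + PReLU projection to a 64-d embedding, followed by
    a sub-center ArcFace (k=3) logit layer with per-class (dynamic) margins."""
    def __init__(self, in_dim, num_classes, embed_dim=64, k=3, s=30.0):
        super().__init__()
        self.neck = nn.Sequential(nn.Linear(in_dim, embed_dim),
                                  nn.BatchNorm1d(embed_dim),
                                  nn.PReLU())
        self.weight = nn.Parameter(torch.randn(num_classes * k, embed_dim))
        self.k, self.s = k, s

    def forward(self, features, labels=None, margins=None):
        emb = F.normalize(self.neck(features))                       # 64-d embedding
        cos = F.linear(emb, F.normalize(self.weight))                 # (B, num_classes * k)
        cos = cos.view(cos.size(0), -1, self.k).max(dim=-1).values    # sub-center: best of k
        if labels is None:
            return emb                                                # inference: embedding only
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        m = margins[labels].unsqueeze(1) * F.one_hot(labels, cos.size(1))
        logits = self.s * torch.cos(theta + m)                        # ArcFace logits for the focal loss
        return emb, logits
</code></pre>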
<p><strong>Training</strong><br>
Only the head is trained, as for other teams. At first we set the total epochs to 20, but after experiments showed that we could achieve the same level of performance with 5 epochs, we were able to increase the number of trials.</p>
<ul>
<li>Total epochs: 5</li>
<li>Batch size: 512</li>
<li>Optimizer: Adam (lr=1e-2, weight decay=1e-4)</li>
<li>Scheduler: CosineAnnealingLR (eta_min=1e-3, T_max=5)</li>
<li>Augmentation: RandomResizedCrop, HorizontalFlip, ImageCompression, HueSaturationValue, RandomBrightnessContrast, Blur, ShiftScaleRotate, CoarseDropout</li>
</ul>
<p><strong>Things that didn't work</strong></p>
<ul>
<li>Dimensional reduction of embeddings using PCA</li>
<li>Training OpenCLIP ViT without freezing backbones</li>
<li>Cropping with saliency models (preprocess)</li>
<li>TTA (horizontal flip)</li>
<li>AutoEncoder</li>
<li>Other architectures of head</li>
</ul>
<p><a href="https://www.kaggle.com/evilpsycho42" target="_blank">@evilpsycho42</a> <br>
We tested learning rates of 1e-3 or 1e-4 after head-only training and the scores did not improve.<br>
But we only tried unfreezing training at the beginning of the competition, so this may not have been a sufficient test. </p>
<p>We also referred to this solution and parameters:</p>
<ul>
<li>solution: <a href="https://www.kaggle.com/competitions/landmark-retrieval-2021/discussion/277099" target="_blank">https://www.kaggle.com/competitions/landmark-retrieval-2021/discussion/277099</a></li>
<li>code: <a href="https://github.com/ChristofHenkel/kaggle-landmark-2021-1st-place/tree/main/configs" target="_blank">https://github.com/ChristofHenkel/kaggle-landmark-2021-1st-place/tree/main/configs</a></li>
</ul>
<p><a href="https://www.kaggle.com/evilpsycho42" target="_blank">@evilpsycho42</a> <br>
Thank you for your comment and question!</p>
<p>The training loss after 5 epochs is 18.2422.<br>
This is the sum of ArcFace loss and focal loss.</p>
<p><a href="https://www.kaggle.com/manwithaflower" target="_blank">@manwithaflower</a> <br>
Thank you for your question!</p>
<p>We could fit 3 models into 9 hours limit without any special changes.<br>
The approximate submission times for each model are as follows:</p>
<ul>
<li>ViT-H: 5 hours</li>
<li>ViT-L: 2.5 hours</li>
<li>ViT-B: 1 hour (based on the submission time of 3 ensemble)</li>
<li>ViT-H + ViT-L + ViT-B: 8.5 hours</li>
</ul>
<p>In our training phase, fp16 is used for 3 backbones but it is not used for the submission model. This is our submission notebook. <br>
<a href="https://www.kaggle.com/code/ksork6s4/guie-submission-openclip-3models-ensemble" target="_blank">https://www.kaggle.com/code/ksork6s4/guie-submission-openclip-3models-ensemble</a></p>
<p>Additional information:<br>
ViT-B is used in our final model, but it may not be necessary as similar scores can be reached with only ViT-H and ViT-L.</p>
<p><a href="https://www.kaggle.com/jesucristoe" target="_blank">@jesucristoe</a> <br>
Thank you for your question and congratulations on 4th place!</p>
<p>We could fit 3 models into 9 hours limit without any special changes.<br>
The approximate submission times for each model are as follows:</p>
<ul>
<li>ViT-H: 5 hours</li>
<li>ViT-L: 2.5 hours</li>
<li>ViT-B: 1 hour (based on the submission time of 3 ensemble)</li>
<li>ViT-H + ViT-L + ViT-B: 8.5 hours</li>
</ul>
<p>In our training phase, fp16 is used for 3 backbones but it is not used for the submission model. This is our submission notebook. <br>
<a href="https://www.kaggle.com/code/ksork6s4/guie-submission-openclip-3models-ensemble" target="_blank">https://www.kaggle.com/code/ksork6s4/guie-submission-openclip-3models-ensemble</a></p>
<p>Additional information:<br>
ViT-B is used in our final model, but it may not be necessary as similar scores can be reached with only ViT-H and ViT-L.</p> |
Google Universal Image Embedding | 12th Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Universal Image Embedding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the host for hosting a great competition, to my team member ( <a href="https://www.kaggle.com/hycloud" target="_blank">@hycloud</a> ) for the hard work, and to all competitors.<br>
It was a great competition. I have learned many things from this competition and the solutions, which makes me very happy. </p>
<p>We describe our 12th place solution below.</p>
<h2>Dataset</h2>
<p>Data 1: </p>
<ul>
<li>GLDv2-Clean</li>
<li>Met-Art</li>
<li>Products-10k</li>
<li>Shopee-Products</li>
<li>Food-1k</li>
<li>DeepFashion</li>
<li>Standford-Cars</li>
<li>Storefronts (shared public kaggle dataset)</li>
<li>Furniture (shared public kaggle dataset).</li>
</ul>
<p>Data 2:</p>
<ul>
<li>GLDv2 Full</li>
<li>Met-Art</li>
<li>Products-10k</li>
<li>Shopee-Products</li>
<li>Public Dataset</li>
<li>rp2k</li>
<li>product10k</li>
<li>deepfashion</li>
<li>fashion 200k</li>
<li>myautoge</li>
<li>Storefronts (shared public kaggle dataset)</li>
<li>ifuniture</li>
<li>stanford_online_shop</li>
</ul>
<h2>Model</h2>
<table>
<thead>
<tr>
<th>Model</th>
<th>Loss</th>
<th>Neck</th>
<th>Data</th>
<th>Score(Private/Public)</th>
</tr>
</thead>
<tbody>
<tr>
<td>ViT-Huge-224</td>
<td>Arcface with s=30</td>
<td>1024-4096-BN-PReLU-64</td>
<td>Data1</td>
<td>0.629/0.609</td>
</tr>
<tr>
<td>ViT-Huge-224</td>
<td>Multi Similarity Loss + CrossBatchMemory</td>
<td>1024-4096-BN-PReLU-64</td>
<td>Data2</td>
<td>0.624/0.622</td>
</tr>
<tr>
<td>Swin-Large-384</td>
<td>Arcface with s=30</td>
<td>1536-6144-BN-PReLU-64-BN-PReLU</td>
<td>Data1</td>
<td>0.612/0.602</td>
</tr>
<tr>
<td>Swin-Large-224</td>
<td>Arcface with s=30</td>
<td>1536-6144-BN-PReLU-64-BN-PReLU</td>
<td>Data1</td>
<td>0.576/0.562</td>
</tr>
</tbody>
</table>
<p>For ViT-H we froze the backbone and trained only the projector. For Swin we used two stages: the first stage freezes the backbone and the second stage fine-tunes the whole model; 10 to 20 epochs for the first stage and 1 to 5 epochs for the second stage.<br>
Optimizer: SGD, lr 0.1 for the first stage and 0.001 for the second stage (ViT-H used Adam).</p>
<h2>Ensemble</h2>
<p>We ensembled the models in the final layer using PCA.<br>
We used the competition-distributed dataset as the dataset for fitting the PCA; that is an important point.<br>
The final result with this picked PCA dataset was better than with the full dataset.</p>
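<p>A minimal sketch of this kind of embedding ensemble (illustrative only; the per-model embeddings and the index of picked images are placeholders):</p>
<pre><code>import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import normalize

def pca_ensemble(embeddings_per_model, fit_index, out_dim=64):
    """Concatenate L2-normalised embeddings from several models, then reduce to
    out_dim with a PCA fitted only on the picked subset of images."""
    concat = np.hstack([normalize(e) for e in embeddings_per_model])
    pca = PCA(n_components=out_dim)
    pca.fit(concat[fit_index])        # fit on the picked PCA dataset, not the full data
    return normalize(pca.transform(concat))
</code></pre>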
<p>Our final score is 0.667/0.654 (private/public).</p>
<h2>What didn't work</h2>
<ul>
<li>Stable Diffusion (storefront generation)</li>
<li>CLIP two-stage training (backbone freeze -> full)</li>
</ul>
<p>Thanks! It was an exciting competition, and fun too.</p> |
Google Universal Image Embedding | 13-th place solution [Knowledge Distillation helps a lot] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Universal Image Embedding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks. Your solution also impresses me a lot. <br>
I have spent lots of time on this in the past few days with little sleep; time to sleep now haha.😴</p>
<p>Congrats to the winners. It is a little pity to end up with just a silver medal.<br>
My solution can be summarized as follows:</p>
<ol>
<li>Dataset: Products10k, Google Landmark, DeepFashion, MET Artwork Dataset, Omni, etc. (I wasted lots of time choosing datasets, so I had no time to try more ideas.🙁)</li>
<li>Powerful Openclip-Vit-H backbone, linear projection layer(FC,Dropout,BN), arcface loss function</li>
<li>Freeze-and-finetune training strategy: freeze 2/3 of the CLIP transformer blocks and fine-tune the remaining 1/3 (lr=5e-5), while using a large learning rate (lr=5e-3) to train the linear projection layer.</li>
<li><strong>Relational Knowledge Distillation</strong>: the embedding dimension in the above steps is 256, and directly reducing it with adaptive average pooling harms the performance of the model. Thanks to <a href="https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/336616" target="_blank">this</a>, I found <a href="https://arxiv.org/abs/1904.05068?context=cs.LG" target="_blank">this paper</a>, which proposes the relational knowledge distillation strategy and works well in several image retrieval tasks. Following this paper, I directly insert a linear projection layer after the original linear projection layer (reducing 256->64) and train the student network (a minimal sketch of the RKD distance loss is given after this list).</li>
</ol>
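<p>A compact sketch of the RKD distance loss used for this step (illustrative, following the paper rather than the exact training code): the student is trained so that pairwise distances between its 64-d embeddings mimic the pairwise distances between the teacher's 256-d embeddings.</p>
<pre><code>import torch
import torch.nn.functional as F

def rkd_distance_loss(student_emb, teacher_emb, eps=1e-12):
    """Relational KD (Park et al., 2019): match normalised pairwise-distance structures."""
    def normalised_pdist(x):
        d = torch.cdist(x, x)                       # (B, B) pairwise distances
        mean = d[d > 0].mean().clamp(min=eps)       # mean of the non-zero distances
        return d / mean
    t = normalised_pdist(teacher_emb).detach()      # teacher relations are fixed targets
    s = normalised_pdist(student_emb)
    return F.smooth_l1_loss(s, t)                   # Huber loss between relation matrices

# Training step (sketch): the student projection maps 256-d teacher embeddings to 64-d.
# loss = rkd_distance_loss(student_proj(teacher_emb), teacher_emb)
</code></pre>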
<p>Distillation brings an improvement of about 0.015~0.03 on the LB compared to direct adaptive average pooling.<br>
Vit-L avgpooling: 0.608, distillation: 0.623<br>
Vit-L-336 avgpooling: 0.628, distillation: 0.643<br>
Openclip-Vit-H avgpooling: 0.641, distillation: 0.663<br>
(all in publice LB)<br>
<strong>(BUT!!!! Maybe a directly trained 64-d projection layer also works well; I had no time to do more experiments…☹️) (solved)</strong><br>
update: code (naive version without collating) available at <a href="https://github.com/StarrySky-SHT/GUIE-13th-place-solution" target="_blank">https://github.com/StarrySky-SHT/GUIE-13th-place-solution</a> <br>
update: <strong>directly training the 64-d projection layer scored 0.644/0.654 on the public/private LB while distillation scored 0.663/0.666, so knowledge distillation does improve the performance of the model.✔️</strong></p>
<p>The "mimic" I mentioned above means that knowledge distillation is a mimicry process between the student model and the teacher model. My understanding is that we can use KD to make the student model reach performance comparable to the original higher-dimensional embedding (256 or larger) even though its embedding dimension is lower (64). I refer you to the RKD paper. As for the linear layer you mention, I agree with you; I simply added an FC layer without thinking too much.</p>
<p>Thanks for your response. I will run extra experiments and check which solution (directly training 64-D / knowledge distillation) is better in my case.</p>
<p>Yes, strictly speaking, it can't be interpreted as knowledge distillation in this case since I add parameters to the model. But I think the essence of the methods is the same: both use a student model to mimic a teacher model and thereby obtain a more powerful student network.</p>
<p>My pleasure, glad it helps!</p>
Google Universal Image Embedding | 18th Private LB Solution –> Silver | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Universal Image Embedding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Upscale to 240 and then apply a center crop</p>
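<p>For clarity, a minimal torchvision version of this TTA (the 224 crop size is an assumption about the final model input, not stated explicitly above):</p>
<pre><code>from torchvision import transforms

# Upscale 224 -> 240, then center-crop back to the model's input resolution.
tta_upscale_crop = transforms.Compose([
    transforms.Resize(240, interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.CenterCrop(224),  # assumed model input size
])
</code></pre>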
<p>Our team would like to deeply thank the Kaggle staff and Google for hosting this exciting competition, as well as everyone here who competed. My congratulations to all participants!</p>
<p>Special thanks to <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a> for hard and effective teamwork. We really enjoyed this competition!</p>
<h2>Key points</h2>
<ul>
<li>Custom dataset</li>
<li>CLIP visual encoder trained on LAION-2B, fine-tuned for 22 epochs</li>
<li>Train the projection layer using only ArcFaceMarginLayer</li>
<li>Use a single projection FC layer of size 64 (better than using multiple FC layers)</li>
<li>TTA – upscaling the image from 224 to 240 boosts the score by ~0.003-0.004</li>
<li>Improved training time by pre-extracting embeddings before the actual training starts. We found <a href="https://www.kaggle.com/code/motono0223/guie-tensorflow-clip-distancelayer-train-dim64" target="_blank">this technique</a> late and did not rely on it much. </li>
</ul>
<h2>Dataset</h2>
<ul>
<li>Google landmark recognition 2021 (7k classes, at most 35 images per class). Later we switched to a cleaned version of the GLR dataset, but unfortunately, due to Out Of Memory exceptions happening on a regular basis, we were not able to test it completely</li>
<li>Products10K (9691 classes, 141K images)</li>
<li><a href="https://www.kaggle.com/datasets/maximvlah/edaotkryt" target="_blank">Open Food Facts dataset</a> (2k classes with at least 40 images per class) </li>
<li><a href="https://www.kaggle.com/datasets/prondeau/the-car-connection-picture-dataset" target="_blank">60,000+ Images of Cars</a> (322 classes, 12K images) </li>
<li><a href="https://www.kaggle.com/datasets/liucong12601/stanford-online-products-dataset" target="_blank">Stanford Online Products</a> (take only furniture, at least 10 images per class)</li>
<li><a href="https://www.kaggle.com/datasets/andreybeyn/qudata-gembed-landmarks-210" target="_blank">Landmarks (210)</a> (210 classes, 10K images) </li>
<li><a href="https://www.kaggle.com/datasets/kmader/food41" target="_blank">Food 41</a> (101 classes, 3K images) </li>
<li><a href="https://www.kaggle.com/datasets/kerrit/dishes-67" target="_blank">Dishes 67</a> (67 classes, 670 images)</li>
<li><a href="https://www.kaggle.com/datasets/sainikhileshreddy/food-recognition-2022" target="_blank">Food Recognition 2022</a> (500 classes, 12K images) </li>
<li><a href="https://www.kaggle.com/datasets/kerrit/storefront-146" target="_blank">Storefront</a> (141 classes, 4.5k images) </li>
</ul>
<h2>What did not work</h2>
<ul>
<li>Strong augmentations while training</li>
<li>Unsupervised learning with Contrastive loss</li>
<li>Use images with zero padding for training</li>
<li><a href="https://www.kaggle.com/datasets/rhtsingh/130k-images-512x512-universal-image-embeddings" target="_blank">MET</a>, art history datasets, 130K images </li>
<li>Distance layer, ElasticArcFace</li>
<li>AdaptiveAveragePooling, GeM pooling, Multi-linear layers (neck)</li>
<li>Use a bigger number of images per class / bigger number of classes for the GLR dataset</li>
</ul>
<p>Thanks! 😊 </p> |
Google Universal Image Embedding | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Universal Image Embedding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you! See you in the next comp🙂</p>
<p>So the competition is finalized now, and we are very glad to take 1st place on both the public and private LB. We really appreciate the amazing competition held by the host <a href="https://www.kaggle.com/andrefaraujo" target="_blank">@andrefaraujo</a> and the support provided by Kaggle staff <a href="https://www.kaggle.com/maggiemd" target="_blank">@maggiemd</a>. Contestants who shared their wonderful ideas and resources also helped me a lot. I may forget to tag some people, but I want to say you all did a really wonderful job! </p>
<h2>TL;DR</h2>
<p>Here I will not only share how we reached the final performance, but also go through the whole competition in chronological order: the difficulties we faced and the approaches we chose to overcome them. I hope it helps people better understand the practical issues one may face, and alternative ways to get past them, in future competitions.</p>
<h2>Get started - The first month of the competition</h2>
<p>Honestly, we were doubtful about whether to join this competition at all. It is rare for both of us to see a competition without a provided dataset. But we finally took it as a challenge to get over this task and decided to join in (luckily we did!). </p>
<p>At the beginning, <a href="https://www.kaggle.com/khabel" target="_blank">@khabel</a> and <a href="https://www.kaggle.com/skyyy93" target="_blank">@skyyy93</a> were way ahead of everyone else on the LB, which made us eager to know what general direction they were taking to create such a giant score gap. We decided to go through the discussions carefully to find tips or hints toward a superior score. <a href="https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/340043" target="_blank">https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/340043</a> and a deleted thread claimed high scores, even 0.47+, from only using a pre-trained model, without naming the actual pre-trained weights. At the same time we found threads like [<a href="https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/338505" target="_blank">https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/338505</a>] reporting poor performance after training or fine-tuning. So our first hypothesis was that <a href="https://www.kaggle.com/khabel" target="_blank">@khabel</a> and other highly ranked contestants might just be using pre-trained weights without training or fine-tuning, and we decided to collect powerful pre-trained weights first. From <a href="https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/340043" target="_blank">https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/340043</a> we could tell that larger pre-training datasets give better scores, since weights trained on ImageNet-22K beat ImageNet-1K ones. The best ImageNet-22K weights gave 0.405, so the 0.47+ model had to be trained on a much larger dataset. Searching for weights pre-trained on very large datasets, we found the wonderful CLIP work [<a href="https://arxiv.org/abs/2103.00020" target="_blank">https://arxiv.org/abs/2103.00020</a>]; for reproducible code, [<a href="https://github.com/mlfoundations/open_clip" target="_blank">https://github.com/mlfoundations/open_clip</a>] was a good place to start. We tried almost every available ViT-L weight and found the best performance came from LAION-400M 31ep, with a score of 0.499. We thought we had nearly found the most powerful pre-trained weights, yet there was still a gap of about 0.050 to <a href="https://www.kaggle.com/khabel" target="_blank">@khabel</a>. Considering notebooks and discussions like <a href="https://www.kaggle.com/code/moeinshariatnia/random-vs-avg-max-dim-reduction" target="_blank">https://www.kaggle.com/code/moeinshariatnia/random-vs-avg-max-dim-reduction</a>, we wanted to see whether some dimensionality-reduction tricks might help. We found that random projection works better for weak weights like EfficientNet-B0 trained on ImageNet-1K, but harms the performance of strong weights like ViT-L LAION-400M 31ep. The other methods, such as PCA and t-SNE, did not work in our experiments. We did find one seemingly robust method to improve the score without training: simply using fewer values from the embedding vector when computing each mean boosted both public and private LB by ~0.010. Our guess is that averaging over too many embedding values washes out the real features. However, that was the best we could get, reaching ~0.510 without any training or fine-tuning.
We needed to go further, and from that point on we believed <a href="https://www.kaggle.com/khabel" target="_blank">@khabel</a> must be training on something. </p>
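<p>For context, the training-free reduction these scores refer to is plain average pooling of the CLIP embedding down to 64-D; the second function below is only one possible reading of the "use fewer values per mean" trick (we do not give full details above), shown purely as an illustration.</p>
<pre><code>import torch
import torch.nn.functional as F

def avg_pool_reduce(emb, out_dim=64):
    # Standard training-free reduction: average-pool a (N, D) embedding to (N, 64).
    return F.adaptive_avg_pool1d(emb.unsqueeze(1), out_dim).squeeze(1)

def partial_mean_reduce(emb, out_dim=64, keep=4):
    # One possible interpretation: split into 64 groups but average only a few
    # values per group, so the real features are not washed out. Requires D % 64 == 0.
    groups = emb.reshape(emb.shape[0], out_dim, -1)
    return groups[:, :, :keep].mean(dim=-1)
</code></pre>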
<h2>Let’s dive into training</h2>
<p>Training is not that easy, not only because many contestants claimed they got much lower scores after training, but also because the strict rules of this competition forbid datasets without a commercial-use guarantee. We tried our best and went through <a href="https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/337384" target="_blank">https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/337384</a>, but it was still rather hard to find datasets that perfectly met the requirements and could also be expected to help in this competition. Realizing this competition is an extension of the previous Google Landmark series, we decided to first train on GLDv2-clean, although we were unsure whether that was allowed. Intuitively, the simplest approach is to add a projection layer that downsizes the embedding to 64-d and fine-tune only that last layer. Since we used GLDv2, it made sense to learn how the winners of that competition reached their best performance. From [<a href="https://www.kaggle.com/competitions/landmark-retrieval-2020/discussion/176037" target="_blank">https://www.kaggle.com/competitions/landmark-retrieval-2020/discussion/176037</a>] and [<a href="https://arxiv.org/abs/2009.05132" target="_blank">https://arxiv.org/abs/2009.05132</a>], we supposed ArcFace would play a vital role here. So we kept it simple at first, fine-tuning the last layer using ArcFace with a margin of 0.5 and s of 30. Surprisingly, we reached 0.560 after 6 epochs of fine-tuning, indicating that we were on the right track.</p>
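<p>A minimal sketch of this setup, i.e. an ArcFace margin head (m=0.5, s=30) on top of a frozen backbone with a trainable 64-D projection; this is illustrative code, not our training code, and the feature dimension and class count are placeholders.</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceLoss(nn.Module):
    # Standard ArcFace margin loss (without the easy-margin variant).
    def __init__(self, embed_dim, num_classes, s=30.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embed_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m = s, m

    def forward(self, emb, labels):
        cos = F.linear(F.normalize(emb), F.normalize(self.weight)).clamp(-1 + 1e-7, 1 - 1e-7)
        onehot = F.one_hot(labels, cos.shape[1]).float()
        logits = self.s * torch.cos(torch.acos(cos) + self.m * onehot)  # add margin only to the target class
        return F.cross_entropy(logits, labels)

class Projected64(nn.Module):
    # Frozen CLIP visual encoder + trainable 64-D projection (the "last layer").
    def __init__(self, backbone, feat_dim=768, out_dim=64):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.proj = nn.Linear(feat_dim, out_dim)

    def forward(self, images):
        with torch.no_grad():
            feats = self.backbone(images)
        return self.proj(feats)
</code></pre>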
<p>But we realized it would be hard to move on: if GLDv2 was on the borderline, the rest of the datasets looked clearly against the rules from our point of view. We strongly suspected that most teams, including us, might be disqualified over dataset issues. We opened a thread here [<a href="https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/344643" target="_blank">https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/344643</a>], and we were glad to see <a href="https://www.kaggle.com/maggiemd" target="_blank">@maggiemd</a> and <a href="https://www.kaggle.com/andrefaraujo" target="_blank">@andrefaraujo</a> respond in time, making any public dataset allowed as long as it is mentioned in the forum. </p>
<p>Freed from the previous constraints, we could try many things to improve. We always think one should follow this order: choose the datasets first, then decide on the model and training-related details. So we tested various datasets. To maintain good performance and save time, we added datasets to our training list iteratively instead of retraining on each dataset from the very beginning. We added Products-10k, Shopee, the MET Artwork Dataset, Alibaba goods, H&M Personalized Fashion, GPR1200, GLDv2-Full, and the DeepFashion Consumer-to-shop Clothes Retrieval Benchmark part, boosting the score above 0.610. The score seemed to have converged, since more epochs of fine-tuning no longer helped, so it was time to work on the model and training side. We felt it was hard to change the model structure itself, because pre-trained weights are very vulnerable to any extra structure added on top, and even if we found a way around that, we did not think it would be a good trade-off in terms of compute and time. </p>
<p>Moving on to training tricks: so far we had left the backbone weights untouched, which was definitely not ideal. Now that the linear head was well trained, we no longer had to worry about a randomly initialized head damaging the backbone weights. So we unfroze the backbone and trained the whole model with a 10x lower initial learning rate. Just as <a href="https://www.kaggle.com/anonamename" target="_blank">@anonamename</a> mentioned in [<a href="https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/359161" target="_blank">https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/359161</a>], we found this very likely to cause over-fitting if we trained for 6 or more epochs as we had done for the linear head. Investigating the phenomenon, we found that when training the whole model, the weights of the linear projection jumped sharply. That was strange, since the linear projection had already converged when we fine-tuned it individually, and it made us think about what the weights of the linear projection actually represent. </p>
<p>Given a set of weights of a projection layer F(C, X), <br>
where C denotes the central embedding of each class from class pool X.</p>
<p>From this view, the shaking of F(C, X) reveals that the central embedding of each class was changing rapidly, which means the Euclidean distances between classes changed a lot. Those inter-class distances are meaningful, since they encode the relationships between classes. The rapid change of the central embeddings may be a regardless-of-the-cost adaptation to fit the training loss, and we supposed this was one of the root causes of the over-fitting. So we decided to freeze the final FC layer when training the backbone.</p>
<p>Adding dropout to the fully connected layers is a well-known trick to reduce over-fitting when training CNNs or vision transformers. It does not always work, but here we thought it was worth trying because 1) our model has two FC layers, one producing the 512-d embedding originally provided by the ViT and another we added to downsize it to 64-d, and with more FC layers, dropout tends not to lose that much information; and 2) the over-fitting here was rather serious.</p>
<p>We combined the two methods above, trained for only 3 epochs, and boosted the model to 0.650-0.660.</p>
<p>In the earlier training we had found that Products-10k most likely gives the biggest improvement, so we fine-tuned on Products-10k individually at the end, following the same order of “first fc, then backbone”. It reached 0.671. </p>
<p>It seemed hard to keep improving from there; so what should Kagglers do now? :) </p>
<h2>Yes, ENSEMBLE!</h2>
<p>But ensembling did not seem to work in this competition, as mentioned in [<a href="https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/337360" target="_blank">https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/337360</a>] and [<a href="https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/339554" target="_blank">https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/339554</a>]. To solve an issue we first need to find its cause: why did ensembling not work here? I fully agree with <a href="https://www.kaggle.com/manwithaflower" target="_blank">@manwithaflower</a>'s point of view in [<a href="https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/340546#1876558" target="_blank">https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/340546#1876558</a>]. Recalling what we found about F(C, X): if different models end up with similar F(C, X), then an ensemble of those models should work. </p>
<p>How to achieve this? Basically we have two ways:</p>
<p>First, if we fine-tune models with different hyper-parameters, in the spirit of model soups [<a href="https://arxiv.org/abs/2203.05482" target="_blank">https://arxiv.org/abs/2203.05482</a>] but applied as a model ensemble, it will likely work since F(C, X) will not change much. That is what we tried, and we got a boost to 0.680 with an ensemble of resolutions 224 and 280. </p>
<p>Second, if we manage to keep F(C, X) effectively the same while maintaining performance, the model ensemble will also work, and can even beat the first way, because the variance among the models can be huge, which helps the ensemble. The way to do it is based on the <br>
assumption that the different spaces represented by the F(C, X) of different models can be aligned through a linear operation, like below:</p>
<p>If we have,<br>
y^ = F1(C, X) * B1(W, x), and,<br>
y^ = F2(C, X) * B2(W, x),<br>
Then ∃ G(C1, C2), such that,<br>
y^ = F1(C, X) * G(C1,C2) * B2(W, x)</p>
<p>So all we need is to train a G(*) on the models to do the ensemble. This is what we want to further investigate after this competition.</p>
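<p>We did not implement this during the competition; purely as an illustration, one simple way to estimate such a linear alignment G is ordinary least squares on paired embeddings of the same images from the two models:</p>
<pre><code>import numpy as np

def fit_alignment(emb_b, emb_a):
    # Least-squares G such that emb_b @ G approximates emb_a, i.e. model B's
    # embedding space is mapped into model A's space before averaging the two.
    G, *_ = np.linalg.lstsq(emb_b, emb_a, rcond=None)
    return G

# usage sketch: aligned_b = emb_b_new @ G; ensemble = (emb_a_new + aligned_b) / 2.0
</code></pre>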
<p>Things were really interesting up to this point; then a good catch by <a href="https://www.kaggle.com/dschettler8845" target="_blank">@dschettler8845</a> in [<a href="https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/353768" target="_blank">https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/353768</a>] broke that. LOL (but it really was an informative catch!)</p>
<p>We had to redo EVERYTHING MENTIONED ABOVE for the new LAION-2B ViT-H model thanks to this weight :(, with a few changes: 1) drop the model ensemble, since ViT-H is a really huge model; 2) train on all the datasets at the same time, dropping Products-10k from that mix and keeping it as the final fine-tuning dataset.</p>
<p>This reached a score of 0.703. After fine-tuning at resolution 280, it reached 0.705. One more trick: training again through the whole process from last layer to backbone at 280 with fewer epochs and a lower learning rate gave 0.723. An alternative to raising the resolution further is to give the patches some overlap. This gave the best performance on the last day: public LB 0.732 and private LB 0.728 with resolution 290 and 4-pixel patch overlap. </p>
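<p>The overlapping-patch trick is only described briefly above, so here is a rough sketch of one way to realize it with an open_clip ViT: shrink the stride of the patch-embedding convolution (14 - 4 = 10 for a 4-pixel overlap) and resize the positional embeddings to the new token grid. The attribute names (<code>visual.conv1</code>, <code>visual.positional_embedding</code>) follow open_clip's VisionTransformer and may differ between versions; treat this as an assumption, not our exact code.</p>
<pre><code>import math
import torch
import torch.nn.functional as F

def overlap_patches(model, image_size=290, patch=14, overlap=4):
    visual = model.visual
    stride = patch - overlap
    visual.conv1.stride = (stride, stride)            # overlapping patch embedding
    new_grid = (image_size - patch) // stride + 1     # tokens per side at the new stride

    pos = visual.positional_embedding.data            # shape (1 + old_grid**2, width)
    cls_pos, patch_pos = pos[:1], pos[1:]
    old_grid = int(math.sqrt(patch_pos.shape[0]))
    patch_pos = patch_pos.reshape(old_grid, old_grid, -1).permute(2, 0, 1).unsqueeze(0)
    patch_pos = F.interpolate(patch_pos, size=(new_grid, new_grid),
                              mode="bicubic", align_corners=False)
    patch_pos = patch_pos.squeeze(0).permute(1, 2, 0).reshape(new_grid * new_grid, -1)
    visual.positional_embedding = torch.nn.Parameter(torch.cat([cls_pos, patch_pos], dim=0))
    return model
</code></pre>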
<p>Hope this will help someone to better perform in Kaggle Competitions. See you guys in the next comp, Happy Kaggling!</p>
<p>Github repository: <a href="https://github.com/LouieShao/1st-Place-Solution-in-Google-Universal-Image-Embedding" target="_blank">https://github.com/LouieShao/1st-Place-Solution-in-Google-Universal-Image-Embedding</a><br>
Paper: <a href="https://arxiv.org/abs/2210.08473" target="_blank">https://arxiv.org/abs/2210.08473</a></p>
<p>Thank you:D</p>
<p>Glad to hear that help!</p>
<p>And feel free to let me know if we can help in your future work about this:)</p>
<p>Hi Andre! </p>
<p>I trained on all the datasets <strong>all at once</strong>, since many papers, especially on network adaptation, claim <strong>that it is worth re-training the whole model after the optimal architecture has been found through training.</strong> The same idea applies here, but for <strong>datasets</strong>. We added different datasets to our training list one by one ONLY during the dataset selection process, keeping only the datasets that boosted our score. In this way we got our final list of datasets to be used. After that, we re-trained the model on all the datasets to get the best performance, leaving only Products-10k as the last stage of training.</p>
<p>Again, thanks for hosting such an amazing competition! (Will we have a season 2?)</p>
<p><code>Would you know what is the resulting number of training images and classes?</code><br>
Will check tomorrow when I can reach my laptop. But you can simply sum them up, since no sampling happens here. </p>
<p><code>And during training, I guess you are mixing data from all datasets, and shuffling the entire set of images so that each batch could have images from several different domains (is this correct?)</code><br>
That’s correct. We sample each mini-batch randomly. </p>
<p><code>Did you encounter any issues where maybe one dataset would be much larger than another, and lead to biasing the embedding too much towards some image types?</code></p>
<p>We did not dive into this. But regarding domains: for nearly 30% of the domains we did not select any dataset at all, since they are too small to affect the score. </p>
<p>So here is an assumption: if a dataset A is ‘much larger’ than B, then B should be smaller than about 0.1M images. (Since we don’t have datasets like LAION-2B, one dataset being much larger than another only happens when the other is super small.) And for a super tiny dataset, we don’t mind if it gets influenced, since it basically contributes nothing to the final score (maybe a little boost; even in that case we would still first make sure the big datasets get well trained, regardless of the influence on the small ones).</p>
<p>I will keep training other ideas, since my training time is far longer than 9 hours. But in most cases, I recommend testing various ideas during that time if you have a robust CV.</p>
<p>Thanks! I’m working on that, for another 2K words😂</p>
<p>Sure. Single node with 4 A100 GPUs. The whole training process should cost ~20 days.</p>
<p>Thanks for your kind words!</p>
<p>Thank you! As for your question: both, since they are both linear operations, which can be reduced to one single operation, and there is no non-linear activation between them. </p>
Google Universal Image Embedding | 25th solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Universal Image Embedding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for organizers hosting a nice competition.</p>
<p>Initially I worked on this competition with a text-image contrastive method, tried dimensionality-reduction techniques like UMAP, and also tried embedding each category (fashion/package/landmark…) in separate discrete spaces, but these approaches were not good.</p>
<p>The approach of <a href="https://www.kaggle.com/code/motono0223/guie-clip-tensorflow-train-example" target="_blank">motono0223's baseline</a> was the best for me: frozen CLIP + ArcFace head.</p>
<h2>Downsampling for the number of classes</h2>
<ul>
<li>The 64D embedding space is not large, so I tuned the number of classes used for training.</li>
<li>Training on 4000 classes from Products10k/GLR was the best for me (a sampling sketch follows this list).<br>
<img src="https://user-images.githubusercontent.com/61892693/195221946-f85f5e92-b57f-4315-bee9-4c9956d71a09.png" alt="image"></li>
</ul>
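<p>The class subsampling itself is straightforward; a small pandas sketch of keeping roughly 4000 classes (column name and random seed are illustrative, not my exact code):</p>
<pre><code>import pandas as pd

def subsample_classes(df, n_classes=4000, seed=0):
    # df has one row per image with a "label" column; keep a random subset of classes.
    kept = pd.Series(df["label"].unique()).sample(n=n_classes, random_state=seed)
    return df[df["label"].isin(kept)].reset_index(drop=True)
</code></pre>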
<h2>Hard to create good CV-LB correlation</h2>
<ul>
<li>I configured several different retrieval tasks (Products10k / GLR / Stanford Online Products / DeepFashion / MET / Food-101 / ObjectNet) but could not find a clear CV-LB correlation.</li>
<li>When training only with Products10k/GLR, the Stanford Online Products (Cabinet/Sofa/Chair) retrieval setting was relatively well correlated with the LB, but it was still not a perfect proxy.<br>
<img src="https://user-images.githubusercontent.com/61892693/195222493-90ad0fef-b80a-46b2-bf9c-2e8f57028a3c.png" alt="image"></li>
</ul>
<h2>Freeze CLIP</h2>
<ul>
<li>Unfreezing CLIP for fine-tuning did not work for me. Lowering the learning rate made it work, but freezing with a high learning rate was better.</li>
<li>Fine-tuning other backbones, like ImageNet-pretrained models, was not good.</li>
<li>Adding more transformer layers on top of the frozen CLIP was not good.</li>
</ul>
<h2>My code for this competition</h2>
<ul>
<li><a href="https://github.com/Fkaneko/kaggle_google_universal_image_embedding" target="_blank">https://github.com/Fkaneko/kaggle_google_universal_image_embedding</a></li>
</ul> |
Google Universal Image Embedding | 28th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Universal Image Embedding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi <a href="https://www.kaggle.com/rasoulmojtahedzadeh" target="_blank">@rasoulmojtahedzadeh</a>,</p>
<p>Our TF model consisted, besides the input layers and the CLIP model itself, of only the 64-unit dense layer that was used as the embedding layer.</p>
<p><code>tf.keras.layers.Dense(64, name = 'embedding_dense_layer')</code></p>
<p>On each epoch we would save the weights of this layer through a Callback with custom code.</p>
<pre><code>class SaveModelCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
embed_layer_object = self.model.get_layer('embedding_dense_layer')
embed_layer_kernel0 = embed_layer_object.weights[0]
np.save('embed_layer_kernel0.npy', embed_layer_kernel0.numpy())
embed_layer_bias0 = embed_layer_object.weights[1]
np.save('embed_layer_bias0.npy', embed_layer_bias0.numpy())
</code></pre>
<p>In our PyTorch inference model we would then set up the model in a similar way and load the exported weights into the PyTorch layer.</p>
<p>So it looked something like this:</p>
<pre><code>class EmbeddingModel(nn.Module):
def __init__(self, target_size = [224, 224], normalize = True):
super().__init__()
self.target_size = target_size
self.model = CLIPVisionModel.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K")
self.final = nn.Linear(1280, 64, bias = True)
self.normalize = normalize
def forward(self, x):
x = x.to(device)
x = x / 255.0
x = transforms.functional.normalize(x, mean = [0.48145466, 0.4578275, 0.40821073], std = [0.26862954, 0.26130258, 0.27577711])
x = transforms.functional.resize(x, size=self.target_size, interpolation = InterpolationMode.BICUBIC)
model_output = self.model(x)
image_embeds = model_output.pooler_output
image_embeds = image_embeds / image_embeds.norm(p=2, dim=-1, keepdim=True)
x = image_embeds
x = self.final(x)
if self.normalize:
x = torch.nn.functional.normalize(x)
return x
</code></pre>
<p>The tensorflow exported weights would then be loaded as follows.</p>
<pre><code>tf_embed_layer_bias = np.load("../input/embeddingfiles/embed_layer_bias0.npy")
tf_embed_layer_weights = np.load("../input/embeddingfiles/embed_layer_kernel0.npy")
model = EmbeddingModel(target_size = [224, 224], normalize = True)
model.final.weight.data = torch.from_numpy(tf_embed_layer_weights.T).to(device)
model.final.bias.data = torch.from_numpy(tf_embed_layer_bias).to(device)
</code></pre>
<p>We could use it this way since we didn't further finetune the CLIP backbone.</p>
<p>Hopes this answers your question.</p>
<p>Hi <a href="https://www.kaggle.com/rasoulmojtahedzadeh" target="_blank">@rasoulmojtahedzadeh</a> Yes, fortunately it does. It is even able to run the G-14 version at inference time. That one has over a billion parameters.</p>
<p>The performance is however less than that of the H-14 model. The G-14 model was only trained for 1/3 of the epochs.</p>
<p>First of all we would like to thank Kaggle / Google for a very interesting competition. Also many thanks to my teammate <a href="https://www.kaggle.com/mightyrains" target="_blank">@mightyrains</a> for the excellent cooperation and teamwork.</p>
<p><strong>Our solution</strong><br>
Our solution consisted of a TensorFlow training pipeline that ran on the Kaggle TPUs. We additionally used Colab Pro because of Kaggle's 20-hour TPU limit. This proved to be a very good combination for running many experiments. </p>
<p>We used the CLIP-ViT-H-14 model as the backbone. As the projection layer we used a 64-unit dense linear layer trained with ArcFace (s = 30, m = 0.30). Augmentation consisted of the usual: horizontal/vertical flips, rotations, and random changes to saturation, brightness, hue and contrast.</p>
<p>The majority of training runs consisted of 30 to 40 epochs where each epoch consisted of 20% of all training data.</p>
<p>For datasets the core consisted of Products10K and the Google Landmarks data. In our final solution we added some small datasets such as Food Recognition, the Stanford Cars dataset and Best Artworks of All Time. Looking at the private LB, Products10K and Landmarks on their own already seem to give a very good result. Depending on the training run, the additional datasets could slightly increase or decrease the results, which is likely caused by the randomness in each training run.</p>
<p>For the TensorFlow CLIP ViT-H-14 model we eventually ran into the issue that the submission would error out on the compute resources used. We solved this in a simple way by exporting the weights of the projection layer and importing them into a PyTorch model set up in the same way, which we used for inference and submission.</p>
<p>We looked at setting up a complete training pipeline in PyTorch, but given how easily the TensorFlow/PyTorch combination worked for us, we decided to keep it that way.</p>
<p>Our selected solutions scored 0.653 on the Private LB. Our best solution on the Private LB scored 0.658. </p>
<p><strong>What didn't work / didn't seem to work / partially failed ;-)</strong></p>
<ol>
<li>We had a hard time getting a good local validation strategy to work. There was too little consistency with the LB score. Because of the little time available in the end, we didn't invest further time in it.</li>
<li>We tried various datasets like OmniBenchmark and web-scraped images. With embeddings generated by a base CLIP model we tried to create clusters of images on which to further train a CLIP model, with the idea of increasing performance. In general this didn't work…although it is interesting to note that performance also did not really drop. Using clean and clear datasets with proper labels was the best.</li>
<li>We did some tests with ensembles of models. This didn't seem to work on the LB….with hindsight, it did work however. In the next competition this will definitely get more focus.</li>
<li>Early in the competition we already did experiments with also training the backbone. At that moment it didn't give us proper results, so we didn't follow up on it. Reading the various other solutions where this was applied successfully, this is something we will retry with a few Late Submissions.</li>
</ol>
<p>Overall we can look back at a very exciting and challenging competition which we enjoyed very much.</p>
<p><a href="https://www.kaggle.com/rasoulmojtahedzadeh" target="_blank">@rasoulmojtahedzadeh</a> Oh I think PyTorch was used plenty enough….just maybe less visible in public kernels this time…but I haven't checked.</p>
<p>I think PyTorch is somewhat more memory-efficient, which makes the difference. We noticed when doing inference with the TF version of the H-14 model that every time it would get errors after about 8 hours…so well within the time limit. We did some local checks on the speed TF could achieve, and it seemed the errors occurred around the time the embeddings for all images were generated. Presumably after that, the KNN process in the Kaggle submission would start.</p>
<p>We tried to solve it with TF, but switching that part to PyTorch was relatively easy.</p>
<p>Hi <a href="https://www.kaggle.com/evilpsycho42" target="_blank">@evilpsycho42</a> Thanks! Yes combination of TPU and GPU from Kaggle + Colab. We used the model weights as available in <a href="https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K" target="_blank">https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K</a></p> |
Google Universal Image Embedding | 37-th solution - Self Supervised Approach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Universal Image Embedding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I know this is not the best solution, but I want to share this very simple approach that scores 0.637 on the private LB. This is the recipe:<br>
1) Frozen backbone of OpenCLIP ViT-H-14 trained on LAION-2B</p>
<p>2) Trainable linear head (no batch normalization, no dropout)</p>
<p>3) Self-supervised training on the 130k dataset:<br>
<a href="https://www.kaggle.com/datasets/rhtsingh/130k-images-512x512-universal-image-embeddings" target="_blank">https://www.kaggle.com/datasets/rhtsingh/130k-images-512x512-universal-image-embeddings</a></p>
<ul>
<li><p>contrastive loss:<br>
<a href="https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#contrastiveloss" target="_blank">https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#contrastiveloss</a><br>
(NTXentLoss also produces similar results)</p></li>
<li><p>image batches are simply created as:<br>
[batch, augment(batch)] (SimCLR style; see the sketch after this list)<br>
where augment is composed of:<br>
random crop, random rotation, random blur, color jitter, cutout</p></li>
<li><p>Total batch size of 512 and adam as optimizer</p></li>
<li><p>30 epochs</p></li>
</ul>
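<p>A minimal sketch of this SimCLR-style step with pytorch-metric-learning (the temperature value and the frozen-backbone handling are my assumptions, not exact settings from the run):</p>
<pre><code>import torch
import torch.nn.functional as F
from pytorch_metric_learning import losses

loss_fn = losses.NTXentLoss(temperature=0.07)   # losses.ContrastiveLoss() gave similar results

def training_step(backbone, head, images, augment):
    # Batch is [batch, augment(batch)]; view i in the first half pairs with view i in the second.
    views = torch.cat([images, augment(images)], dim=0)
    with torch.no_grad():                        # frozen OpenCLIP ViT-H-14 backbone
        feats = backbone(views)
    emb = F.normalize(head(feats), dim=-1)       # trainable linear head to 64-D
    labels = torch.arange(images.shape[0], device=emb.device).repeat(2)
    return loss_fn(emb, labels)
</code></pre>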
<p>4) TTA using square padding (zero filled) and simple 224 resize</p>
<p>To conclude: maybe the score can be improved with a larger dataset, but I had no time to test it. I hope this is helpful.</p>
Google Universal Image Embedding | 39th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Universal Image Embedding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to the competition hosts and competitors!</p>
<p>This is 39th place solution.</p>
<h1>Model</h1>
<p>Ensemble 2 heads using pytorch.</p>
<ul>
<li>Backbone : ViT-H-14 + LAION2B</li>
<li>Head1 : Dropout(0.1) -> Linear(64) -> Normalize</li>
<li>Head2 : Linear(64) -> Normalize</li>
<li>Output : Head1 + Head2 (add)</li>
</ul>
<p>ViT -> Head1 / Head2 -> Normalize -> Ensemble</p>
<p>During training, the backbone was frozen and the head modules were trained using ArcFace; a sketch of the two-head ensemble is given below.</p>
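<p>A small PyTorch sketch of the two-head ensemble described above (the feature dimension and the final re-normalization are my assumptions):</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadEmbedder(nn.Module):
    def __init__(self, backbone, feat_dim=1024, out_dim=64):
        super().__init__()
        self.backbone = backbone                                  # frozen ViT-H/14 + LAION-2B
        self.head1 = nn.Sequential(nn.Dropout(0.1), nn.Linear(feat_dim, out_dim))
        self.head2 = nn.Linear(feat_dim, out_dim)

    def forward(self, x):
        with torch.no_grad():
            feats = self.backbone(x)
        e1 = F.normalize(self.head1(feats), dim=-1)
        e2 = F.normalize(self.head2(feats), dim=-1)
        return F.normalize(e1 + e2, dim=-1)                       # add the two normalized heads
</code></pre>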
<h1>Dataset</h1>
<ul>
<li>Head1 : GLR2021 + ImageNet1K + Products10K</li>
<li>Head2 : GLR2021 + ImageNet1K + Products10K + AlibabaGoods + FoodRecognition2022 + MET</li>
</ul>
<h1>What did not work</h1>
<p>Some of the trials below used a TensorFlow implementation.</p>
<ul>
<li>ArcFace (large s + small margin)</li>
<li>SubCenter ArcFace (Some K values)</li>
<li>AdaCos</li>
<li>Head : SelfAttention + Linear</li>
<li>Head : Linear, ReLU, Dropout, Linear</li>
<li>Head : Linear, ReLU, Dropout, Linear, BatchNormalization</li>
<li>Contrastive Learning (SimCLR, SupCon)</li>
<li>Focal Loss</li>
<li>Single models or ensembles of other CLIP variants (RN50, ViT-B, ViT-L (224px, 336px), ViT-g), CNNs, and other ArcFace-based heads</li>
<li>Ensemble method (concat + AveragePooling, weighted add)</li>
<li>Using other dataset </li>
</ul>
<h1>Question</h1>
<ul>
<li>Is data augmentation not helpful here?</li>
<li>Ensembling by concatenating along the channel axis looked good, but the score was lower than a simple 64-dim model:<br>
model1 -> 32 dim, model2 -> 32 dim, ensemble (64 dim)</li>
<li>Are instance-level labels not good?</li>
<li>Is 2-stage classification good? Stage 1: genre classification, stage 2: embedding (I had no time to try)</li>
</ul> |
Google Universal Image Embedding | 5th place solution[NS embedding] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Universal Image Embedding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Instances are sampled with <code>.sample(random_state=0)</code> as follows:</p>
<pre><code># glr2021 sampling
labels_glr = df_glr["label"].unique()
labels_glr = pd.Series(labels_glr).sample(n=len(labels_glr) - 2000, random_state=0)
df_glr = df_glr[df_glr["label"].isin(labels_glr)]
# GPR1200 sampling
df_GPR1200 = df_GPR1200[df_GPR1200["category"] != "iNat"]
# food101 sampling
df_food101 = df_food101.groupby("label").sample(n=30, random_state=0)
</code></pre>
<p>batch size is 256</p>
<p>※2022/10/12 Title changed due to change in ranking.<br>
※2022/10/18 Add solution code: <a href="https://github.com/riron1206/kaggle-Google-Universal-Image-Embedding-Competition-5th-Place-Solution" target="_blank">https://github.com/riron1206/kaggle-Google-Universal-Image-Embedding-Competition-5th-Place-Solution</a><br>
※2022/10/19 Add arXiv paper: <a href="https://arxiv.org/abs/2210.09495" target="_blank">https://arxiv.org/abs/2210.09495</a></p>
<p>Thank you to the hosts and the organizers for having such a challenging competition! <br>
Thinking about good image embeddings was a lot of fun. And thank you to my teammates and all the participants for their hard work!</p>
<h1>Key Points</h1>
<ul>
<li>Dataset tuning</li>
<li>CLIP visual encoder trained on LAION-2B</li>
<li>head only training with ArcFace </li>
<li>strong regularization (weight_decay=0.1)</li>
<li>use feature (normalized original height, original width, original aspect ratio)</li>
<li>TTA</li>
<li>resize antialias=True</li>
</ul>
<h1>Dataset</h1>
<ul>
<li><a href="https://www.kaggle.com/datasets/motono0223/guie-glr2021mini-tfrecords-label-10691-17690" target="_blank">motono0223's google landmark recognition 2021</a><ul>
<li>Randomly deleted data for 2000 classes</li></ul></li>
<li><a href="https://www.kaggle.com/datasets/motono0223/guie-products10k-tfrecords-label-1000-10690" target="_blank">motono0223's products10k</a><ul>
<li>All data were used</li></ul></li>
<li><a href="https://github.com/Visual-Computing/GPR1200" target="_blank">GPR1200</a><ul>
<li>Deleted 200 classes of iNaturalist data</li></ul></li>
<li><a href="https://www.kaggle.com/datasets/srujanesanakarra/food101" target="_blank">food101</a><ul>
<li>Reduced the number of data per class to 30</li></ul></li>
</ul>
<h1>Validation</h1>
<ul>
<li>In order to see the performance of zeroshot, we prepared a test set with only the data of classes that do not exist in train and validation<ul>
<li>train_valid test split: <code>StratifiedGroupKFold(n_splits=20).split(X, y=< used Dataset >, groups=< class_label >))</code></li>
<li>train valid split: <code>StratifiedKFold(n_splits=20).split(X, y=< class_label >))</code></li></ul></li>
<li>In the submit, we used the weights of the epochs with higher MAP5 in the test set</li>
</ul>
<h1>Model</h1>
<p>It is very important to use a ViT trained on LAION-2B from <a href="https://github.com/mlfoundations/open_clip" target="_blank">OpenCLIP</a> as the backbone for ArcFace. OpenCLIP is more powerful than the official <a href="https://github.com/openai/CLIP" target="_blank">openai/CLIP</a>.</p>
<p>Also, adding image-size information to the embedding improved the public LB by about 0.003.</p>
<ul>
<li><strong>backbone: OpenCLIP ViT-H/14 224x224</strong></li>
<li>head: BatchNorm1d(num_features=1024+3, affine=False) -> Linear(num_features=1024+3, out_features=64)<ul>
<li>num_features is set to +3 because the normalized original height, original width, and original aspect ratio are concatenated to the backbone embedding (a sketch of this head follows this list)</li></ul></li>
<li>loss: ArcFace(scale=30.0, margin=0.5)</li>
</ul>
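<p>A compact sketch of this head with the three extra size features (shapes follow the description above and the TTA code later in this post; it is not our exact training code):</p>
<pre><code>import torch
import torch.nn as nn

class HeadWithSizeFeatures(nn.Module):
    def __init__(self, clip_dim=1024, out_dim=64):
        super().__init__()
        self.bn = nn.BatchNorm1d(clip_dim + 3, affine=False)
        self.fc = nn.Linear(clip_dim + 3, out_dim)

    def forward(self, clip_emb, heights, widths):
        # Normalized aspect ratio, height and width, as in concat_image_info() below.
        size_feats = torch.stack([heights / widths, heights / 224.0, widths / 224.0], dim=1)
        x = torch.cat([clip_emb, size_feats.float()], dim=1)
        return self.fc(self.bn(x))
</code></pre>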
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1694617%2F0a169ad601296aabab75c6e1a80fd619%2Fmodel_architecture_loss_change_meta_info_del.png?generation=1665458129592589&alt=media" alt="model_architecture"></p>
<h1>Training</h1>
<p>It is important to apply strong regularization (weight decay) to the model parameters to avoid overfitting the training data.</p>
<ul>
<li><p><strong>freeze backbone parameters and train only head</strong></p></li>
<li><p>optimizer: SAM(AdamW(<strong>weight_decay=0.1</strong>))</p></li>
<li><p>scheduler: warmup 0-3epoch(lr=0.0001->0.01) + Cosine Annealing 3-1000epoch(lr=0.01->0.0001)</p></li>
<li><p>augmentation: we used Albumentations follow OpenCLIP augmentations</p></li>
</ul>
<pre><code> A.Compose([
A.RandomResizedCrop(height=224, width=224, scale=(0.9, 1.0), ratio=(1.0, 1.0)),
A.Normalize(mean=[0.48145466, 0.4578275, 0.40821073], std=[0.26862954, 0.26130258, 0.27577711], max_pixel_value=255.0, p=1.0),
ToTensorV2(),
])
</code></pre>
<h1>TTA</h1>
<p>Since the LB improved when images with different heights and widths were resized while keeping the aspect ratio, the TTA was switched according to the aspect ratio of the image, as follows.</p>
<p>In addition, we switched the TTA using a classification model, which was trained on <a href="https://www.kaggle.com/datasets/rhtsingh/130k-images-512x512-universal-image-embeddings" target="_blank">130k-images-512x512-universal-image-embeddings</a>.</p>
<pre><code>import torch
import torch.nn as nn
from torchvision import transforms
class CLIPTTA(nn.Module):
def __init__(self, encoder, neck, classifier):
super().__init__()
self.encoder = encoder # torchscript OpenClip ViT
self.neck = neck # Model neck(head)
self.classifier = classifier # Classifier Model
...
def concat_image_info(self, emb, H: int, W: int):
subfeature = torch.tensor([[
float(H/W),
float(H/224.0),
float(W/224.0),
]], device=emb.device).float()
return torch.cat([emb, subfeature], dim=1)
def forward(self, x):
B,C,H,W = x.shape
ratio = W/H
if ratio==1.0:
x_resize = self.ccrop_resize(x,cratio=0.9) # center_crop_resize
feat = self.encoder(x_resize) # OpenClip ViT embedding
feat = self.concat_image_info(feat, H, W)
feat = self.head(feat)
return torch.nn.functional.normalize(feat)
else:
x_resize = self.ccrop_resize(x,cratio=1.0) # resize
feat1 = self.encoder(x_resize) # OpenClip ViT embedding
cls = torch.argmax(self.classifier(feat1)) # Classification
feat1 = self.concat_image_info(feat1, H, W)
feat1 = self.head(feat1)
# cls: apparel=0, packaged=8, toys=10
if cls == 0 or cls == 8 or cls == 10:
x_pad = self.pad_ccrop_resize(x,cratio=0.9) # keep aspect center_crop_resize
feat2 = self.encoder(x_pad) # OpenClip ViT embedding
feat2 = self.concat_image_info(feat2, H, W)
feat2 = self.head(feat2)
x_pad = self.pad_ccrop_resize(x,cratio=1.0) # keep aspect resize
feat3 = self.encoder(x_pad) # OpenClip ViT embedding
feat3 = self.concat_image_info(feat3, H, W)
feat3 = self.head(feat3)
return torch.nn.functional.normalize((feat1+feat2+feat3)/3.0)
else:
x_pad = self.pad_ccrop_resize(x,cratio=0.9) # keep aspect center_crop_resize
feat2 = self.encoder(x_pad) # OpenClip ViT embedding
feat2 = self.concat_image_info(feat2, H, W)
feat2 = self.head(feat2)
return torch.nn.functional.normalize((feat1+feat2)/2.0)
</code></pre>
<p>Also, setting <code>antialias=True</code> in torchvision's resize improved the LB. This is because antialiasing is active when the OpenCLIP model is trained.</p>
<pre><code>torchvision.transforms.functional.resize(x, [224, 224],interpolation=transforms.InterpolationMode.BICUBIC, antialias=True)
</code></pre>
<h1>LB update history</h1>
<p>The yellow line is the border of the GOLD MEDAL.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1694617%2F576cf28ca33ef0699ce3f8a86512c4f9%2Flb_update_goldline_meta_info_del.png?generation=1665458200593557&alt=media" alt="lb_update"></p>
<h1>What did not work</h1>
<ul>
<li>Other Dataset (<a href="https://www.kaggle.com/datasets/motono0223/guie-imagenet1k-mini1-tfrecords-label-0-999" target="_blank">imagenet1k</a>, <a href="https://www.kaggle.com/datasets/dschettler8845/the-met-dataset" target="_blank">MET</a>, <a href="https://github.com/ZhangYuanhan-AI/OmniBenchmark" target="_blank">OmniBenchmark</a> etc…)</li>
<li>fine tuning(<a href="https://arxiv.org/abs/2202.10054" target="_blank">LP-FT</a>)</li>
<li>ArcFace + PCA (compress output of model to 64 dimensions)</li>
<li>Mixup</li>
<li>sub-center ArcFace, ElasticFace, AdaFace, DistanceMarginLayer</li>
<li>multi-layered head</li>
<li>Concat output of timm model to embedding</li>
<li>Concat output of ViT projection layer to embedding</li>
<li>Model ensemble (average embedding of 64 dimensions)</li>
<li>and more…..</li>
</ul> |
Google Universal Image Embedding | 9th place solution: finetune CLIP ViT-H/14 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Universal Image Embedding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I thank the organizers for this great competition!</p>
<p>My solution is heavily based on <a href="https://www.kaggle.com/motono0223" target="_blank">@motono0223</a>'s work. Thank you very much!<br>
<a href="https://www.kaggle.com/code/motono0223/guie-clip-tensorflow-train-example" target="_blank">https://www.kaggle.com/code/motono0223/guie-clip-tensorflow-train-example</a></p>
<p><a href="https://www.kaggle.com/code/akihirok/9th-place-guie-fintune-tf-clip-trainable/notebook" target="_blank">* My code is now public *</a></p>
<h2>Key improvements over the original work</h2>
<ul>
<li>Input 64D vectors to ArcFace (instead of 256D ones) (~ 0.01 gain)</li>
<li><a href="https://laion.ai/blog/large-openclip/" target="_blank">Open CLIP ViT-H/14 on LAION-2B</a> (~ 0.08 gain)</li>
<li>Reduce LR linearly (~ 0.01 gain)</li>
<li>Finetune backbone CLIP model for 1 epoch w/ low LR w/o DA (~ 0.02 gain)<ul>
<li>1 epoch = 10 epochs in the <a href="https://www.kaggle.com/motono0223" target="_blank">@motono0223</a>'s work</li></ul></li>
</ul>
<h2>Dataset</h2>
<p>I indeed thought that finding nice datasets was the most important thing to win the competition.<br>
However, I eventually used the same datasets as the original work.<br>
I couldn't find any other dataset which works well. 😭</p>
<p>Products-10K > Landmark Retrieval 2021 > ImageNet-1K > Deep Fashion = Food Recognition > others (in my best knowledge)</p>
<h2>Note</h2>
<ul>
<li>I suffered from random OOM errors at submission time (with CLIP H/14). I gave up on more complex architectures because of this. (Then how can others do an ensemble or TTA? 🤔)</li>
</ul>
<p>Thank you for reading!</p>
<p>PyTorch was all I needed 😂</p>
<p>I didn't experience that actually.<br>
Try the notebook I shared 😃</p> |
Google Universal Image Embedding | GUIE 4th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Universal Image Embedding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>GUIE 4th Place Solution</h1>
<p>A quick thank you to my teammates <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> and Marcos; it was great working with you. It was a lot of fun exploring different datasets and dimensionality reduction techniques as part of a team. Finally, thank you to the team of people that hosted this competition.</p>
<p>Repository : <strong><a href="https://github.com/IvanAer/G-Universal-CLIP" target="_blank">G-Universal-CLIP</a></strong> 🔥 🚀</p>
<h3>TLDR Brief Summary</h3>
<p>We use as baseline the content from the famous repo <a href="https://github.com/mlfoundations/open_clip" target="_blank">https://github.com/mlfoundations/open_clip</a></p>
<p>Our solution is an ensemble of 2 CLIP models trained using sub-center ArcFace with an adaptive margin loss; each model produces a 512-D descriptor. A total of 9 models were trained, consisting of 4 ViT-L-14-336 models and 5 ViT-H-14 models. <br>
The weights of these 9 models were then averaged per architecture to make 2 “model soups”. </p>
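<p>For readers unfamiliar with "model soups": the averaging step is just a uniform mean over the checkpoints' weights. A hedged sketch (checkpoint loading details are illustrative):</p>
<pre><code>import torch

def make_soup(checkpoint_paths):
    # Uniformly average the state_dicts of models that share the same architecture.
    soup = None
    for path in checkpoint_paths:
        state = torch.load(path, map_location="cpu")
        if soup is None:
            soup = {k: v.clone().float() for k, v in state.items()}
        else:
            for k in soup:
                soup[k] += state[k].float()
    return {k: v / len(checkpoint_paths) for k, v in soup.items()}
</code></pre>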
<p>We created a 1024 dimensional embedding using the concatenated outputs of both models. We then used PCA to reduce the dimensionality from 1024 to 64.</p>
<h4>CLIP Models</h4>
<p>Pre-trained models from <a href="https://github.com/mlfoundations/open_clip" target="_blank">open_clip</a><br>
ViT-L-14-336 : openai<br>
ViT-H/14 on LAION-2B laion_s32b_b79k</p>
<h4>Training Data</h4>
<p>Google Landmarks 2020 : <a href="https://www.kaggle.com/competitions/landmark-recognition-2020/data" target="_blank">https://www.kaggle.com/competitions/landmark-recognition-2020/data</a></p>
<p>Products-10k : <a href="https://www.kaggle.com/competitions/products-10k/data" target="_blank">https://www.kaggle.com/competitions/products-10k/data</a></p>
<p>We did not find it beneficial to use more data for finetuning. Products and landmarks already represent 50% of this challenge data distribution.</p>
<h4>Training Routine</h4>
<p>We provide the details in <strong><a href="https://github.com/IvanAer/G-Universal-CLIP" target="_blank">G-Universal-CLIP</a></strong> and our paper! Stay tuned</p>
<p>Keep in mind we use ObjectNet for offline validation only, as discussed in this post: <a href="https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/352644" target="_blank">https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/352644</a></p>
<h3>Single Model</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5116959%2Fec7ebd70ca3c485ef7d1f2606845fc1f%2FViT-Model.jpg?generation=1665563516204970&alt=media" alt=""></p>
<h3>Model Ensemble</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5116959%2F0bf93302aa021538c8e454e742778477%2FViT-Model-Ensemble.jpg?generation=1665563545588984&alt=media" alt=""></p>
<h4>Model Soup / Ensembles</h4>
<p>When creating the model soup we had to find a way to diversify the models before averaging their weights. It was decided that we would train each model with a different augmentation technique on each run, then train one model without augmentations. For the final ensemble we used the following augmentation methods:</p>
<p>These augmentations were taken from this solution : <a href="https://www.kaggle.com/competitions/happy-whale-and-dolphin/discussion/319896" target="_blank">https://www.kaggle.com/competitions/happy-whale-and-dolphin/discussion/319896</a></p>
<pre><code>def transforms_happy_whale(image_path, image_size):
image = cv2.imread(image_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).astype(np.uint8)
aug8p3 = A.OneOf([
A.Sharpen(p=0.3),
A.ToGray(p=0.3),
A.CLAHE(p=0.3),
], p=0.5)
train_transforms = A.Compose([
A.ShiftScaleRotate(rotate_limit=15, scale_limit=0.1, border_mode=cv2.BORDER_REFLECT, p=0.5),
A.Resize(image_size, image_size),
aug8p3,
A.HorizontalFlip(p=0.5),
A.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1),
ToTensorV2(),
])
return train_transforms(image=image)['image']
</code></pre>
<p>These augmentations were taken from this solution : <a href="https://www.kaggle.com/competitions/landmark-recognition-2020/discussion/187757" target="_blank">https://www.kaggle.com/competitions/landmark-recognition-2020/discussion/187757</a></p>
<pre><code>def transforms_cutout(image_path, image_size):
image = cv2.imread(image_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).astype(np.uint8)
train_transforms = A.Compose([
A.HorizontalFlip(p=0.5),
A.ImageCompression(quality_lower=99, quality_upper=100),
A.ShiftScaleRotate(shift_limit=0.2, scale_limit=0.2, rotate_limit=10, border_mode=0, p=0.7),
A.Resize(image_size, image_size),
A.Cutout(max_h_size=int(image_size * 0.4), max_w_size=int(image_size * 0.4), num_holes=1, p=0.5),
ToTensorV2(),
])
return train_transforms(image=image)['image']
</code></pre>
<p>These augmentations are taken from this paper : <a href="https://arxiv.org/abs/1805.09501" target="_blank">https://arxiv.org/abs/1805.09501</a></p>
<pre><code>def transforms_auto_augment(image_path, image_size):
image = Image.open(image_path).convert('RGB')
train_transforms = transforms.Compose([transforms.AutoAugment(transforms.AutoAugmentPolicy.IMAGENET), transforms.PILToTensor()])
return train_transforms(image)
</code></pre>
<p>We created 4 ViT-L-14-336 models using the above augmentation methods with one model trained without augmentations (all 1 epoch).</p>
<p>We created 5 ViT-H-14 models using the above augmentation methods with one model trained without augmentations (all 1 epoch) and an extra model trained on the auto augment transforms for 2 epochs. </p>
<p>We then averaged these weights and were left with a single ViT-L-14-336 and a single ViT-H-14.</p>
<h3>Final Remarks</h3>
<p>CLIP has a lot of potential in the future and it will be interesting to see where it goes. I am excited to share with everyone the full details in the paper that will soon be published. If you want to learn more, please make sure you check out the repository we have set up. It contains the training code used by our team.</p>
<p>The paper will be available by Monday : <a href="https://arxiv.org/abs/2210.11141" target="_blank">https://arxiv.org/abs/2210.11141</a><br>
Repository : <strong><a href="https://github.com/IvanAer/G-Universal-CLIP" target="_blank">G-Universal-CLIP</a></strong></p>
<p>Roughly 2.5 hours for ViT-H-14 and a little over 3 hours for ViT-L-14-336. This is with a single RTX 3090. </p>
<p>Shouldn't you be using captions as the labels?</p>
<p>If you have the captions you can just train Clip the normal way.</p>
<p>Sorry, this reply is a little late. </p>
<p>PCA should be calculated on the training portion of the dataset. Then validated on the validation portion.</p> |
Google Universal Image Embedding | Private 55th Solution and Lessons Learned | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Universal Image Embedding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and Google for this amazing competition, <a href="https://www.kaggle.com/evilpsycho42" target="_blank">@evilpsycho42</a> and I enjoyed our time a lot in this competition. It is special among all the competitions I have attended so far, in that it required competitors to gather image datasets across all domains to get the best-performing model possible.</p>
<p>Thanks to <a href="https://www.kaggle.com/dschettler8845" target="_blank">@dschettler8845</a> for initiating the external data thread and providing a list of links to help data, which saved us a lot of effort finding the data ourselves.</p>
<h2>Brief Solution Overview</h2>
<h3>Data:</h3>
<p>Datasets</p>
<ul>
<li>apparel: h&m</li>
<li>landmark: google landmark competition</li>
<li>package goods: product 10k, rp2k</li>
<li>furniture: amazon product dataset</li>
<li>storefronts: storefront-146 </li>
<li>artwork: met</li>
<li>dishes: ifood</li>
</ul>
<p>Datasets sampling:</p>
<ul>
<li>Assign # of instances for each domain proportional to the ratio mentioned in this <a href="https://ai.googleblog.com/2022/08/introducing-google-universal-image.html" target="_blank">post</a></li>
<li>Use different ids in the train and validation set, since we want our models to generalize to unseen instances.</li>
</ul>
<h3>Modeling</h3>
<ul>
<li>backbone: clip 224/336 L + linear neck</li>
<li>loss: ArcFace (scale = 10, margin = 0.3)</li>
<li>training:<ul>
<li>phase 1: freeze backbone + CosineAnnealing with warmup 0.1, lr=0.001, 100 epochs</li>
<li>phase 2: unfreeze backbone + CosineAnnealing with warmup 0.1, lr=1e-5, 20 epochs</li></ul></li>
<li>sampling strategy: id uniform sampling</li>
<li>validation strategy: <ul>
<li>use instances disjoint from the train set</li>
<li>pick instances with > 1 image as query images, keep the rest as indexing images</li></ul></li>
</ul>
<h3>Results</h3>
<ul>
<li>clip 224 L: cv 0.7949, public lb: 0.580, private lb: 0.598</li>
<li>clip 336 L: cv 0.8074, public lb: 0.597, private lb: 0.614</li>
</ul>
<h3>What doesn't work</h3>
<ul>
<li>model fusion: <ul>
<li>train swin large 384 + clip together</li>
<li>train swin large 384 + clip separately, combine both backbone and train a new projection head</li></ul></li>
<li>image preprocessing/cropping with saliency segmentation</li>
</ul>
<h2>Lessons Learned</h2>
<p>After seeing the amazing solution from the 6th place <a href="https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/359161" target="_blank">post</a>, we realized what we could have done better to deliver our final solution:</p>
<ol>
<li><strong>We should have tried different backbones</strong>: OpenCLIP ViT-H/14 224x224</li>
<li><strong>We should have tried to overfit our models first</strong>. We used a smaller lr and far fewer epochs to train the projection head while keeping the backbone frozen. Even though it is important to keep iterations fast, it is just as important that the final solution is not underfitting!</li>
</ol> |
Google Universal Image Embedding | Silver medal solution and approaches we tried on the way | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Universal Image Embedding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, we are very happy to be a part of such unordinary and interesting competition, many thanks to the organizers and <a href="https://www.kaggle.com/andrefaraujo" target="_blank">@andrefaraujo</a>, we learnt a lot and did a wonderful work, and many thanks to my great team QuData <a href="https://www.kaggle.com/synset" target="_blank">@synset</a>, <a href="https://www.kaggle.com/kerrit" target="_blank">@kerrit</a> and <a href="https://www.kaggle.com/vaslav" target="_blank">@vaslav</a>, you are awesome!</p>
<h2>Pretrained Models and AvgPool</h2>
<p>From our first encounter with the competition, we decided to try different models for image encoding: SwinTransformer, ConvNeXt, Dino VisionTransformer and EfficientNet. However, none of them gave a boost compared to CLIP from OpenAI (ViT_L_14_336px). Based on this model, with the compression of the CLIP output vector (768 elements in this model, later in the retrained - 1024) using AvgPool, we got the first starting point in this competition with a metric of 0.447. It is important to clarify that all the time we used the last CLIP layer, and experiments on the penultimate layer did not improve the result.</p>
<p>Experiments with different preprocessing variations (such as the original CLIP preprocessing) did not increase the metric either. Similarly, experiments with standard PCA and an AutoEncoder instead of AvgPool gave negative results: although the results were better locally, after submission the metric turned out to be significantly lower.</p>
<p>Curiously, a noticeable boost at this stage was achieved by analyzing and optimizing the CLIP output vector. It turned out that in the feature space after encoders (both text and image) there are strongly correlated outliers on pairs of some vector components. This fact does not depend on the dataset and the model (although the outliers, of course, are in different places in different models). So, below is a plot of the typical means of each component of the feature vector (in red) and their standard deviations (in green).</p>
<p><code>mean, std = vectors.mean(axis=0), vectors.std(axis=0)</code><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1667828%2F2e68a9bf9c8646677b7465a02fa9f965%2FStd.png?generation=1665757712158240&alt=media" alt=""></p>
<p>Removing these components gave a quality gain of +0.020 points, and compressing them instead with whitening PCA (trained on Caltech + ImageNet1k + a self-assembled Apparel dataset) gave an increase of +0.038. A similar effect was obtained by normalizing the vectors:<br>
<code>vectors = (vectors - mean) / std</code></p>
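<p>A minimal numpy sketch of this embedding post-processing is shown below; <code>vectors</code> is assumed to be an (N, D) array of CLIP embeddings already computed, and the "3 x median std" cut-off is purely illustrative, not the exact threshold used.</p>
<pre><code>import numpy as np

mean, std = vectors.mean(axis=0), vectors.std(axis=0)

# Option 1: drop the few components whose spread is an outlier.
keep = std < 3 * np.median(std)          # illustrative threshold
trimmed = vectors[:, keep]

# Option 2: standardize every component instead of dropping any.
normalized = (vectors - mean) / (std + 1e-8)

# L2-normalize afterwards, since retrieval compares embeddings by distance.
normalized /= np.linalg.norm(normalized, axis=1, keepdims=True)
</code></pre>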
<h2>Text-based PCA</h2>
<p>At the beginning of the competition, there were quite strict requirements for the origin of datasets. Since the CLIP model was trained on the cosine proximity between the text and image encoder vectors, we decided to try using a set of text descriptions instead of image datasets. With the help of a text generator based on templates, about 200 thousand balanced texts on the topics of the competition domains were obtained. The vectors of these texts (after the text encoder with subsequent normalization) made it possible to build a fairly good PCA model for this stage:</p>
<p>CLIP Backbone Text (output 768) => Whitening PCA (output 128) => AvgPool (output 64).<br>
This approach achieved 0.534 points, and we did not return to it again. Note that the use of a PCA-based projector for various image datasets usually did not lead to satisfactory results.</p>
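<p>A sketch of this projector is given below, assuming <code>text_vectors</code> holds the normalized text-encoder outputs used to fit the PCA; averaging adjacent pairs is one straightforward way to implement the 128-to-64 AvgPool.</p>
<pre><code>import numpy as np
from sklearn.decomposition import PCA

# Fit whitening PCA on ~200k generated text embeddings (768-d for ViT-L).
pca = PCA(n_components=128, whiten=True).fit(text_vectors)

def project(image_vectors):
    x = pca.transform(image_vectors)           # (M, 768) -> (M, 128), whitened
    x = x.reshape(len(x), 64, 2).mean(axis=2)  # average-pool adjacent pairs -> (M, 64)
    return x / np.linalg.norm(x, axis=1, keepdims=True)
</code></pre>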
<h2>Training a Projector for a classification</h2>
<p>After the release of <a href="https://www.kaggle.com/motono0223" target="_blank">@motono0223</a>'s notebook (special thanks for that), we decided to train the projector as part of a classification task with contrastive learning based on ArcMargin. Experiments were also carried out with other contrastive-learning methods, but for this problem ArcMargin with parameters m = 0.33 and s = 22 turned out to be better.</p>
<p>Due to the lack of powerful hardware, we immediately discarded the idea of retraining the backbone CLIP. To conduct experiments with hyperparameters and datasets, we decided to get feature vectors once by passing dataset images through the image encoder CLIP. Then these vectors were stored in separate datasets and loaded immediately before training the projector. This made it possible to significantly speed up the learning process (several seconds per epoch on a standard colab GPU machine).</p>
<p>To eliminate overfitting, we used feature space augmentation rather than image space. To do this, a random bias with a normal distribution was added to the feature vector (its variance was a rather important hyperparameter). Also, a dropout layer in front of the projector turned out to be a good way to augment in the feature space.</p>
<p>As a projector, a linear layer was usually used, followed by normalization of the vector. In this way, we have achieved an improvement in the metric to about 0.555. Note that the use of a nonlinear projector always led to a deterioration in the metric (although the metric improved significantly on validation data in local datasets).</p>
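<p>Putting the last three paragraphs together, a simplified PyTorch sketch of the cached-feature training setup might look as follows; the noise level, dropout rate and dimensions are illustrative, and the ArcMargin head itself is omitted.</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class CachedFeatures(torch.utils.data.Dataset):
    """CLIP image features pre-computed once and stored, so epochs take seconds."""
    def __init__(self, feats, labels, noise_std=0.05):
        self.feats, self.labels, self.noise_std = feats, labels, noise_std
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        x = self.feats[i]
        # feature-space augmentation: random Gaussian bias instead of image augmentation
        x = x + self.noise_std * torch.randn_like(x)
        return x, self.labels[i]

projector = nn.Sequential(
    nn.Dropout(0.2),       # extra augmentation directly in feature space
    nn.Linear(1024, 64),   # e.g. 1024-d ViT-H features -> 64-d embedding
)

def embed(x):
    return F.normalize(projector(x), dim=-1)  # unit-norm embedding fed to the ArcMargin head
</code></pre>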
<h2>Experimenting with Datasets</h2>
<p>It seemed to us that it was important to have datasets reflecting the composition of the test dataset by domain. Therefore, at first we manually compiled the following datasets:</p>
<ul>
<li>Packaged Goods (1000 classes),</li>
<li>Landmarks (210 classes),</li>
<li>Dishes (67 classes),</li>
<li>Furniture (60 classes).</li>
</ul>
<p>Although rather unbalanced, this selection of datasets was enough to reach approximately 0.600 points.<br>
Next, we decided to expand the list of datasets and select the correct options and formats for the datasets themselves. </p>
<p>According to our observations, instance-type datasets work better in this competition than class-type datasets; instance-type means that one class represents one specific entity. Landmarks is an ideal example of an instance-type dataset, while Dishes is a typical class-based dataset.<br>
As a result, we came to the following list of datasets:</p>
<h3>Apparel & accessories:</h3>
<ul>
<li><a href="https://www.kaggle.com/datasets/andreybeyn/qudata-gembed-hm-products-apparel-acc-187" target="_blank">https://www.kaggle.com/datasets/andreybeyn/qudata-gembed-hm-products-apparel-acc-187</a></li>
<li><a href="https://mmlab.ie.cuhk.edu.hk/projects/DeepFashion.html" target="_blank">https://mmlab.ie.cuhk.edu.hk/projects/DeepFashion.html</a></li>
<li><a href="https://github.com/switchablenorms/DeepFashion2" target="_blank">https://github.com/switchablenorms/DeepFashion2</a></li>
<li><a href="https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations/data" target="_blank">https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations/data</a></li>
<li><a href="https://www.kaggle.com/datasets/mayukh18/fashion200k-dataset" target="_blank">https://www.kaggle.com/datasets/mayukh18/fashion200k-dataset</a></li>
</ul>
<h3>Packaged goods:</h3>
<ul>
<li><a href="https://www.kaggle.com/competitions/products-10k" target="_blank">https://www.kaggle.com/competitions/products-10k</a></li>
<li><a href="https://www.pinlandata.com/rp2k_dataset/" target="_blank">https://www.pinlandata.com/rp2k_dataset/</a></li>
</ul>
<h3>Landmarks:</h3>
<ul>
<li><a href="https://www.kaggle.com/datasets/andreybeyn/qudata-gembed-landmarks-210" target="_blank">https://www.kaggle.com/datasets/andreybeyn/qudata-gembed-landmarks-210</a></li>
<li><a href="https://www.kaggle.com/datasets/kerrit/landmarks-416" target="_blank">https://www.kaggle.com/datasets/kerrit/landmarks-416</a></li>
</ul>
<h3>Furniture & decor:</h3>
<ul>
<li><a href="https://www.kaggle.com/datasets/kerrit/furniture-101" target="_blank">https://www.kaggle.com/datasets/kerrit/furniture-101</a></li>
<li><a href="https://www.kaggle.com/datasets/andreybeyn/qudata-gembed-furniture-180/" target="_blank">https://www.kaggle.com/datasets/andreybeyn/qudata-gembed-furniture-180/</a></li>
<li><a href="https://www.kaggle.com/datasets/promptcloud/amazon-home-furnishings-dataset" target="_blank">https://www.kaggle.com/datasets/promptcloud/amazon-home-furnishings-dataset</a></li>
</ul>
<h3>Storefronts:</h3>
<ul>
<li><a href="https://www.kaggle.com/datasets/kerrit/storefront-146" target="_blank">https://www.kaggle.com/datasets/kerrit/storefront-146</a></li>
</ul>
<h3>Dishes:</h3>
<ul>
<li><a href="https://www.kaggle.com/datasets/kerrit/dishes-67" target="_blank">https://www.kaggle.com/datasets/kerrit/dishes-67</a></li>
</ul>
<h3>Artwork:</h3>
<ul>
<li><a href="http://cmp.felk.cvut.cz/met/" target="_blank">http://cmp.felk.cvut.cz/met/</a></li>
<li><a href="https://www.kaggle.com/datasets/kerrit/artwork-51" target="_blank">https://www.kaggle.com/datasets/kerrit/artwork-51</a></li>
</ul>
<h3>Toys:</h3>
<ul>
<li><a href="https://www.kaggle.com/datasets/andreybeyn/qudata-gembed-toys-46" target="_blank">https://www.kaggle.com/datasets/andreybeyn/qudata-gembed-toys-46</a></li>
</ul>
<p>After assembling the datasets, we realized that the datasets themselves would be difficult to balance in accordance with the domain distribution published by Google, so we decided to achieve this balance another way: by assigning each dataset its own weight in the model pipeline.</p>
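<p>One simple way to implement such per-dataset weights is to sample training examples with probabilities proportional to the desired domain share divided by the dataset size. The sketch below uses PyTorch's <code>WeightedRandomSampler</code>; the target shares shown are placeholders, and <code>sample_domains</code>/<code>counts</code> are assumed to be prepared beforehand.</p>
<pre><code>import torch
from torch.utils.data import WeightedRandomSampler

# Hypothetical target domain shares (roughly following the published distribution).
target_share = {"apparel": 0.25, "packaged_goods": 0.25, "landmarks": 0.20,
                "furniture": 0.10, "storefronts": 0.08, "dishes": 0.05,
                "artwork": 0.04, "toys": 0.03}

# sample_domains[i] is the domain of training sample i; counts[d] is that domain's size.
weights = [target_share[d] / counts[d] for d in sample_domains]
sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)
# loader = torch.utils.data.DataLoader(train_set, batch_size=512, sampler=sampler)
</code></pre>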
<h2>Change to CLIP model ViT-H/14</h2>
<p>About 2-3 weeks before the end of the competition, we decided to change the CLIP model to a slightly different version: CLIP ViT-H-14 laion2b_s32b_b79k. We quickly switched to this model by re-vectorizing all datasets. This pre-trained version of CLIP with the correct list of datasets: Apparel (187), Packaged Goods (3k), RP (2k), Storefront (146), Landmarks (1k), Furniture (135), Food (101), Artwork (51) allowed us to temporarily take 4th place, reaching a score of 0.646 points without any additional training of the CLIP backbone!</p>
<p>Unfortunately, we did not have the opportunity to combine our datasets and approach with retraining of the CLIP backbone, and as a result we were quickly overtaken. Still, this competition was interesting: we learned a lot and gained valuable experience in such an interesting task as image embeddings. Thanks to all!</p>
Google Universal Image Embedding | Silver medal solution: ViT-L/14 with test time augmentations | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Universal Image Embedding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to the host, organizers and all participants!</p>
<p>You can check out my solution here: <a href="https://www.kaggle.com/code/slavabarkov/google-universal-image-embedding-submission" target="_blank">https://www.kaggle.com/code/slavabarkov/google-universal-image-embedding-submission</a></p>
<p>Brief summary of my solution:</p>
<ul>
<li>ViT-L/14 encoder from the OpenCLIP implementation pre-trained on LAION-400M dataset, achieves 0.519 public/0.537 private if the input image is resized while maintaining the aspect ratio</li>
<li>Projection layer trained with ArcFace on the Products-10k dataset based on <a href="https://www.kaggle.com/motono0223" target="_blank">@motono0223</a> work, achieves 0.598 public/0.608 private</li>
<li>Multiple test time augmentations, achieves 0.628 public/0.631 private (a sketch of this embedding-level TTA follows after the list): <ul>
<li>Predefined crops - center crop and two crops with preselected offset</li>
<li>Pixel dropout - drops pixels from image according to predefined mask</li></ul></li>
</ul> |
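<p>A minimal sketch of this kind of embedding-level TTA is given below. The crop offsets and the dropout mask here are illustrative placeholders; the submission notebook linked above contains the exact predefined crops and mask.</p>
<pre><code>import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def tta_embedding(model, image, size=224):
    """image: (3, H, W) float tensor, already resized/normalized for the backbone."""
    h, w = image.shape[1], image.shape[2]
    views = [
        TF.center_crop(image, [size, size]),
        TF.crop(image, top=0, left=0, height=size, width=size),               # offset crop
        TF.crop(image, top=h - size, left=w - size, height=size, width=size), # offset crop
    ]
    mask = (torch.rand_like(image[:1]) > 0.1).float()    # illustrative pixel-dropout mask
    views.append(TF.center_crop(image * mask, [size, size]))

    with torch.no_grad():
        embs = [model(v.unsqueeze(0)) for v in views]
    emb = torch.stack(embs, dim=0).mean(dim=0)           # average the per-view embeddings
    return F.normalize(emb, dim=-1)                      # L2-normalize the final embedding
</code></pre>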
Google Universal Image Embedding | [2nd place] Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Universal Image Embedding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, all components are unfrozen when using stratified lr for training. And it's efficient: both the backbone and the non-backbone parts converge at an appropriate learning rate.</p>
<p>Firstly, many thanks to <a href="https://www.kaggle.com/andrefaraujo" target="_blank">@andrefaraujo</a> for hosting such an interesting competition. And I'm glad that I moved from 3rd to 2nd place in the final standings of this competition (sixsixsix actually scored higher than me!). I will introduce my approach in short.</p>
<p><a href="https://www.kaggle.com/code/w3579628328/2nd-place-solution?scriptVersionId=107642661" target="_blank">Inference Notebook</a></p>
<h4>Dataset</h4>
<ul>
<li>Aliproducts</li>
<li>Art_MET</li>
<li>DeepFashion(Consumer-to-shop)</li>
<li>DeepFashion2(hard-triplets)</li>
<li>Fashion200K</li>
<li>ICCV 2021 LargeFineFoodAI</li>
<li>Food Recognition 2022</li>
<li>JD_Products_10K</li>
<li>Landmark2021</li>
<li>Grocery Store</li>
<li>rp2k</li>
<li>Shopee</li>
<li>Stanford_Cars</li>
<li>Stanford_Products</li>
</ul>
<p>I've used a large amount of data, but didn't do much selection.</p>
<h4>Model</h4>
<ul>
<li>open_clip_torch: Vit-H14-224-visual as backbone</li>
<li>fc with dropout rate=0.2 as neck</li>
<li>arcface head.</li>
<li>Freezing the backbone and finetuning got me ~0.662; flipping boosted it to 0.666.</li>
</ul>
<h4>Idea</h4>
<ul>
<li>Data Sampling: Set maxnum_images_perclass=100, minnum_images_perclass=3.</li>
<li>Data Resampling: Do light resampling to balance the class weights; set minnum_images_perclass=20.</li>
<li>Dynamic Margin (0.666->0.670): From the 1st solution of shopee competition. Set init m=0.1, then m= m+0.1 *(cur_ep-1), max_m = 0.8.</li>
<li>stratified lr (0.670->0.713): This was a huge leap in my experiments. To retain the representation ability of the pretrained model, I set init_lr to 1e-4, which is the lr for the non-backbone parts, while the lr for the backbone is <strong>init_lr*lr_factor</strong> (with lr_factor reasonably low); the experiments were still only halfway done when the competition ended. Anyway, <strong>lr_factor=0.001</strong> is a nice value, which finally gives me a score of 0.713 (finetune with lr_factor=0.001 for 2 epochs, then freeze the backbone and finetune for another 1 epoch, and do flipping at the end). A sketch of the stratified lr and the dynamic margin follows after this list.</li>
</ul>
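<p>A short PyTorch sketch of the stratified learning rate and the dynamic margin is given below. Here <code>backbone</code>, <code>neck</code> and <code>arc_head</code> stand for already-built modules (the CLIP ViT-H/14 visual tower, the fc+dropout neck, and an ArcFace head assumed to expose a <code>margin</code> attribute), and the margin schedule shown is one straightforward reading of the formula above.</p>
<pre><code>import torch

init_lr, lr_factor = 1e-4, 0.001     # backbone lr = init_lr * lr_factor = 1e-7

optimizer = torch.optim.AdamW([
    {"params": backbone.parameters(), "lr": init_lr * lr_factor},
    {"params": neck.parameters(),     "lr": init_lr},
    {"params": arc_head.parameters(), "lr": init_lr},
])

max_m = 0.8
for cur_ep in range(1, 3):                                   # 2 epochs of full finetuning
    arc_head.margin = min(0.1 + 0.1 * (cur_ep - 1), max_m)   # dynamic ArcFace margin
    # ... run the usual training loop for one epoch with this optimizer ...
</code></pre>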
<h4>Conclusion</h4>
<p>To run experiments more efficiently, I sampled only about 1M images for training in the early stage. During the journey to 0.670, I tried many different architectures (a more complex neck, a sub-center ArcFace head, which works well with open_clip ViT-L-336, backbone ensembles, etc.), but the scores only moved slowly. So I felt 0.670 was a bottleneck for me while keeping the backbone frozen and focusing only on the non-backbone parts. I then decided to turn to the backbone after having done every experiment I thought I should do, even though I had failed once before (I had tried simply unfreezing the backbone and finetuning, and the score dropped rapidly). After thinking fairly deeply about how the backbone works, and with some luck, I succeeded in my first experiment and got a score of 0.688. I only tried 3 different lr_factor values due to time limitations, since it took me about 12h+ to train one epoch. Performance: 0.001 (0.707) > 0.005 (0.700) > 0.01.</p>
<p>Here I didn't do much explanation of why I did so, and there are some other interesting experiments I've done, maybe will even get a higher score. I'll include all these in paper, you are welcome to check out the fully solution if you are interested!🤗</p>
<p>It's my first time stepping into the fine-grained image world, which is really interesting! I've learned a lot from this competition and gained a deeper understanding of this kind of task.<br>
Finally, thanks to everyone who contributed to this competition!</p>
<p>Code:<br>
<a href="https://github.com/XL-H/GUIE-2nd-Place-Solution" target="_blank">https://github.com/XL-H/GUIE-2nd-Place-Solution</a></p>
<p>Paper:<br>
<a href="https://arxiv.org/abs/2210.08735" target="_blank">https://arxiv.org/abs/2210.08735</a></p>
<p>To achieve 0.707, only 2 epochs are needed, so it's efficient. It also depends on your data scale; after preprocessing, I have about 7M samples.</p>
<p>6*A40.</p>
<p>I didn’t do so. I was busy testing lr_factor, but didn’t find the optimal one still😅. This is an nice idea to avoid overfitting, worth trying!</p> |
Grasp-and-Lift EEG Detection | Cat & Dog solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Grasp-and-Lift EEG Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Klaus,
Your feedback is appreciated :)
We spent a good amount of time documenting the code and the solution, but considering the high number of models we developed, it wasn't possible to detail each of them. If you have any questions, or there are parts of the code you find difficult to understand, feel free to ask.</p>
<p>Best, Alex</p>
<p>Hi everyone, </p>
<p>You will find our code and documentation for our solution on <a href="https://github.com/alexandrebarachant/Grasp-and-lift-EEG-challenge">this repo</a></p>
<p>The key of this challenge was ensembling. We ended up with a 3-levels pipeline :</p>
<ul>
<li>Level1 models are trained independently on each subject, with features based on the low-pass filtered signal and/or covariance matrices estimated in different frequency bands. We have around 50 lvl1 models, with different combinations of features and classifiers. Our best lvl1 model scores around 0.95 LB and is an adaptation of Tim's CNN script. We had a couple of other models scoring around 0.94, and most of the others are in the 0.90-0.91 range.</li>
<li>Level2 models are trained globally (all subjects put together) on predictions from lvl1 models. They are mainly based on XGB and Recurrent Neural Networks. We have around 30 of these models, and the highest scoring model is an RNN with 8s of history over the predictions, bagged with subsets of lvl1 models. It scores around 0.98.</li>
<li>Level3 models are just different weighted means of lvl2 models, where the weights are optimized to maximize AUC (a sketch of this weight optimization follows after this list).</li>
</ul>
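<p>For readers curious about the lvl3 step, a minimal, generic sketch of optimizing blend weights against AUC (per event) looks like this; <code>preds</code> is an (n_models, n_samples) array of lvl2 validation predictions and <code>y_true</code> the binary labels. This is an illustration, not our exact code.</p>
<pre><code>import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

def neg_auc(weights, preds, y_true):
    weights = np.abs(weights)                        # keep the weights non-negative
    blend = np.average(preds, axis=0, weights=weights)
    return -roc_auc_score(y_true, blend)

def optimize_blend_weights(preds, y_true):
    w0 = np.ones(len(preds)) / len(preds)            # start from a plain mean
    res = minimize(neg_auc, w0, args=(preds, y_true), method="Nelder-Mead")
    w = np.abs(res.x)
    return w / w.sum()
</code></pre>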
<p>With such a high level of ensembling, a main concern was overfitting. Our two final pick were 2 submissions we called "YOLO" and "Safe1". The YOLO submission was our submission with the highest CV and scored 0.98128 / 0.98109. The Safe1 submission was optimized to reduce variance on the CV and was composed of a subset of lvl2 models hand-picked for their stability. This submission scored 0.98108 / 0.98095</p>
<p>The code is quite computationaly expensive, and it will take you up to one week to generate the final results.
Our solution is well applicable in real world and works in real time, provided that the patient with prosthesis is constantly connected to 15 Amazon high-end instances via Google Fiber.</p>
<p>All jokes aside, the final solution is a bit overkill. Judging by a visual inspection of the lvl2 predictions, all of the events should be correctly detected, and the lvl3 ensembling only helps to increase the sharpness of the predictions (reducing error on the edges of the 300ms frames).</p>
<p>We did not focus too much on producing very high scoring lvl1 models. Instead, we tried to produce a maximum of diversity in the lvl1 predictions. We stopped working on lvl1 models a couple of weeks before the deadline to focus entirely on lvl2 models and later on lvl3 models.</p>
<p>RNNs turned out to be very effective for lvl2 ensembling; they were able to catch the sequential structure of the events. However, they were not very successful on lvl1 features.</p>
<p>Finally, we also tried semi-supervised adaptation without any success. Seems that the signals were stable enough across series to not have to rely on such procedure.</p>
<p>Best,</p>
<p>Cat & Dog</p> |
Grasp-and-Lift EEG Detection | Team HEDJ solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Grasp-and-Lift EEG Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We've posted our code to github <a href="https://github.com/bitsofbits/kaggle_grasp_and_lift_eeg_detection">here</a>. </p>
<p>Our approach was simple: we developed a neural net that scored ~0.971 on the leaderboard (I discussed the structure of this net previously in this <a href="https://www.kaggle.com/c/grasp-and-lift-eeg-detection/forums/t/16288/structure-of-our-best-net-hedj">post</a>), generated a bunch of variations on this net and then just averaged the results together. We gave higher weight to nets that we knew or strongly suspected would score well (>0.97) on the public leaderboard and we discarded poorly performing (0.963) nets, but other than that we didn't really have a second-level model.
The highest scoring single net that made it into our ensemble scored 0.97147 on the private leaderboard. However, we didn't submit this net individually until after the close of the competition, so we didn't know that until recently.</p>
<p>We learned several things when we went to clean up, write up and recreate our scores. Two important ones were, first, that we had undertrained many of our nets. Training them for longer generally resulted in better scores; this was particularly true when paired with the improved validation setup we put together near the end of the competition. Second, we didn't keep track of the exact parameters we used when training some of the nets. As a result, when we went to recreate everything, we couldn't reproduce the results of those early nets exactly and had to content ourselves with a score slightly above the score we submitted for third place.</p>
<p>A total of 34 nets made it into the final ensemble, and training all of these nets and generating a solution takes several days of runtime on a machine with two high-powered GPUs. Using a large ensemble of nets is probably impractical, but a single net, or even a few nets averaged together, could probably make predictions fairly efficiently. This is because the top several levels of the net, where most of the computation takes place, could update their outputs incrementally rather than recomputing them from scratch each time. The bookkeeping involved in implementing this would likely be something of a headache, but it seems possible.</p>
<p>If you have any trouble recreating any of the nets, please let us know, and as soon as @deep is done with his thesis/paper marathon and I'm done moving, we'll get right back to you :-) (That is probably Tuesday or Wednesday).</p>
<p>Thanks for the opportunity to compete in the competition with this great dataset. It was a lot of fun.</p> |
Grasp-and-Lift EEG Detection | Team daheimao solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Grasp-and-Lift EEG Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Code and documentation can be found <a href="https://github.com/stupiding/kaggle_EEG">here</a>.</p>
<p>I am a Ph.D. student majoring in neural networks and computer vision. This is my first Kaggle competition. My intention in participating in this competition was to evaluate the performance of the recurrent convolutional neural network (RCNN) in processing time series data. RCNN was first proposed by me for image classification (<a href="http://www.xlhu.cn/papers/Liang15-cvpr.pdf">CVPR 2015</a>), and then used for scene labeling (will appear in <a href="https://nips.cc/Conferences/2015/AcceptedPapers">NIPS 2015</a> soon). In those two tasks the input data are static images, and RCNN performs well. It turns out that RCNN also performs well on EEG data. Without any domain-knowledge-related modification, our best single model achieves 0.97652/0.97661 public/private LB scores.</p>
<p>The pipeline of our solution is simple:</p>
<p>1. A set of candidate models (mostly RCNN) is trained in a 4-fold cross validation (CV) manner over the training set.</p>
<p>2.Greedy forward selection (GFS) is applied to the candidate model set, so that some models are selected for combination and submission.</p>
<p>3. The selected models are re-trained over all the training data, and their predictions over the test data are averaged and submitted.</p>
<p>The pre-processing is very simple, just removing the mean of each input sample, which is the common practice in image classification. The model is neither subject-specific nor event-specific. A model receives the data of any subject as input, and outputs six predictions.</p>
<p>Most of our models are RCNN, whose structure is described in the <a href="https://github.com/stupiding/kaggle_EEG/blob/master/README.md">documentation</a> and the papers. I also tried CNN, and the best CNN model achieves 0.97136/0.97297 public/private LB scores.</p>
<p>GFS seems to work well and its results are consistent with the LB scores. However, I made some mistakes three days before the deadline, and our ensemble almost stopped improving after that.</p>
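<p>For reference, a minimal, generic version of greedy forward selection over cached validation predictions can be written as below; <code>preds</code> is an (n_models, n_samples) array for one event, and models may be selected repeatedly, which implicitly weights them. This is an illustration rather than the exact selection code used here.</p>
<pre><code>import numpy as np
from sklearn.metrics import roc_auc_score

def greedy_forward_selection(preds, y_true, n_rounds=20):
    selected, best_auc = [], 0.0
    for _ in range(n_rounds):
        best_i = None
        for i in range(len(preds)):
            blend = np.mean(preds[selected + [i]], axis=0)
            auc = roc_auc_score(y_true, blend)
            if auc > best_auc:
                best_auc, best_i = auc, i
        if best_i is None:        # no candidate improves the ensemble any more
            break
        selected.append(best_i)
    return selected, best_auc
</code></pre>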
<p>To sum up, my first Kaggle competition was very exciting. I want to thank the organizers and all the other teams!</p>
<p>@Deep, thanks. Yes, I am interested in writing a paper on electrophysiological signal processing. I am not familiar with this field, so it will be great if you can find some other datasets :)</p>
<p>@Alex, thanks. I agree with you that visualizing the learned filters is interesting. Hopefully some useful conclusions may be drawn.</p>
<p>Hi @rmldj, the model is len3584_resize3_bs_c1r4p5_f9n256r35p1</p> |
Greek Media Monitoring Multilabel Classification (WISE 2014) | Code sharing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Greek Media Monitoring Multilabel Classification (WISE 2014)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Description</strong> of my methodology and insights:</p>
<p>http://alexanderdyakonov.narod.ru/wise2014-kaggle-Dyakonov.pdf (sorry for my English)</p>
<p><strong>My code</strong> (python 2.7):</p>
<p>http://alexanderdyakonov.narod.ru/greek_dyakonov_finalok.py (it is my first python-code)</p>
<p><strong>Interview </strong>(no free hunch):</p>
<p>http://blog.kaggle.com/2014/07/28/11th-hour-win-of-greek-media-monitoring-challenge/</p>
<p>Thanks to the organisers and congrats again to all the top finishers!</p> |
Greek Media Monitoring Multilabel Classification (WISE 2014) | Our Approach (5th place) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Greek Media Monitoring Multilabel Classification (WISE 2014)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Great competition with a thrilling finish! That was quite some shake-up for the top 5 positions; I guess we (who made many submissions) ended up overfitting the leaderboard a little bit. Big credit to Alexander and antip for the impressive late finish, proving themselves leaders in this field once more (as in Wikipedia). I personally (since I come from Greece and have studied in Thessaloniki) really wanted to take that first place. Maybe next time!</p>
<p>We made separate models for each one of the labels.</p>
<p>In terms of software, we used the Java implementation of Liblinear, but we changed it a little bit to make it accept our sparse matrices for faster performance, and we made separate models for L2 and L1 regressions and L2 and L1 linear SVMs, all with hard regularization (less than one). In fact, the L1 SVC with some scaling and C=0.19 on the initial feature set <strong>would finish top 10 on its own</strong> with a score of 0.77554 (you may want to consider it, as it is very easy to implement)!</p>
<p>In terms of how we assigned the labels, we always took the highest score for each observation and assigned its corresponding label, plus any other label whose score was above a threshold X (tuned via cross-validation).</p>
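<p>In code, that assignment rule is only a few lines; the sketch below assumes <code>scores</code> holds the per-label decision values from the one-vs-rest models and <code>threshold</code> is the cross-validated X.</p>
<pre><code>import numpy as np

def assign_labels(scores, threshold):
    predictions = []
    for row in scores:                               # row: scores of one observation
        labels = {int(np.argmax(row))}               # always keep the top-scoring label
        labels |= set(np.where(row > threshold)[0])  # plus anything above the threshold
        predictions.append(sorted(labels))
    return predictions
</code></pre>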
<p>We also used neural networks from Encog on 2000 SVD components created from the initial data, and they were quite useful as they optimize a multi-label error (i.e. a multi-objective criterion, not each label independently).</p>
<p>I also coded up random forests from scratch in Java for multi-label problems that accept sparse input, and I managed to finish it in time to run it, but it was really slow, so I had to use sub-optimal parameters and wait for a week for it to finish! They scored around 0.60 but did contribute to the final blend.</p>
<p>For the blend, I used Scikits ExtraTrees and Random Forests to get my final score. It is a shame that my selection of final submissions was poor as well :( Congrats to the winners (really happy for you).</p>
<p>[quote=Julian de Wit;50673]</p>
<p>My goal of the competition was to learn proper blending.</p>
<p>[/quote]</p>
<p>Well, you did more than that, you got your master status as well, congrats!</p> |
Grupo Bimbo Inventory Demand | #1 Place Solution of The Slippery Appraisals team | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Grupo Bimbo Inventory Demand
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Summary</strong></p>
<p>Our overall solution was 2nd-level ensembling. We built a lot of models on the 1st level (using week 9 as the validation set); the training method for most of the 1st-level models was XGBoost. For the second level we used an ExtraTrees classifier and a linear model from Python scikit-learn. The final result is a weighted average of these two models.
The most important features are based on the 1-3 week lags of the target variable grouped by factors and their combinations, aggregated features (min, max, mean, sum) of the target variable grouped by factors and their combinations, and frequency features of the factor variables.
For our work, we used the R environment with the RStudio IDE, and Python with Jupyter. We combined work on our personal computers with cloud computing on an x1.32xlarge Amazon EC2 instance with web interface access. The last week of the competition was spent continuously computing single and 2nd-level models on the x1.32xlarge Amazon EC2 instance.</p>
<p>It was a very interesting competition, our approaches were inspired by many discussions on the competition forum and by many kernels published by other participants.</p>
<p><strong>Features Selection / Engineering</strong></p>
<p>One of the main ideas in our approach is that it is important to know what the previous weeks' sales were. If too many units of a product were supplied to a store in the previous week and they were not sold, the amount of this product supplied to the same store the next week will be decreased. So, it is very important to include lagged values of the target variable Demanda_uni_equil as features for predicting the next sales.
The simplified version of the R script <a href="https://www.kaggle.com/bpavlyshenko/grupo-bimbo-inventory-demand/bimbo-xgboost-r-script-lb-0-457/discussion">“Bimbo XGBoost R script LB:0.457”</a> for such an approach was published on the Kaggle forum of this competition.
We merged transactions data frame with data frames from files cliente_tabla.csv, producto_tabla.csv using appropriate key fields. We used frequencies features grouped by categorical variables (e.g. Producto_ID, Cliente_ID, Agencia_ID, etc. ) and by different combinations of them. Data frame with new product features received by text parsing of product name variable NombreProducto from file producto_tabla.csv was merged to main data frame via key Producto_ID. As a result of parsing of product names we got a lot of new product features which denote e.g. short names of product, weight, brand, etc. We grouped the products by clusters and used the number of clusters as a new feature.
We chose data from the weeks preceding the training-set weeks for the aggregations of the target variable Demanda_uni_equil grouped by factor variables. We calculated the mean, min, max and sum of the target variable grouped by factors and their combinations. These aggregated values were then merged into the main data sets as new features, using the list of these factors as merge keys. We also calculated similar aggregated features, grouped by factors, for the variables Venta_hoy, Venta_uni_hoy, Dev_uni_proxima and Dev_proxima.
We chose the set of 8-9 weeks for the validation and 6-7 weeks’ data were considered as a training set for validation. It was necessary to build 2-level models and find appropriate weights for models blending. 8-9th weeks were chosen as a training set for prediction sales of the 10-11th weeks. So, we generated two types of training the data set: 6-7th weeks training data set for the prediction of 8-9th weeks sales and 8-9th weeks for the prediction of 10-11th weeks sales. Since we used the 1st week lag for the target variable as a feature, we needed to calculate this feature additionally for predicting 11th week sales. First we predicted the10th week, then using the predicted target variable for the 10th week, we calculated lagged features with the lag for 1 week and used these calculated features for the prediction of 11th week sales. As the first step for validation, we calculated the prediction for the 8th week, then using predicted target values we calculated 1 week lag for 9th week data set, and then we made the prediction for 9th week sales. It was done to build the validation model similar to the prediction of 10-11 weeks sales. For lag values, we investigated two types of cases with lags for 1-3 weeks and lags for 2-3 weeks. First we used up to 5 weeks lags, on the next study we used maximum 3 weeks lags. In the case of 1-3 weeks lags, for the validation on 8-9th weeks sales, we made the prediction in 2 steps. For building a two-level model, we used only the validation of 9th week sales. Our study shows that using 1-3 weeks lags gives us better scores comparing with 2-3 weeks lags. For the features with 1 week lags we used only the mean value of target variable grouped by the list of factor variables. For 2-3 weeks lags, we also used averaged lags for such features as Dev_proxima, Dev_uni_proxima, Venta_hoy grouped by the list of different factors. Most of the lagged features which were used for classification have 2 weeks lag. We generated more than 300 features. All received results for the validation on 9th week sales and for the prediction of 10-11th weeks, were used for building two-level models. Our study also shows that using only 7th week data for the validation on 8-9th weeks and 9th week for the prediction of sales on 10-11th weeks gives better scores comparing with two weeks training sets, so we built our next single models based on the sets of 7th week for validation and on the sets of 9th week for the prediction. To speed up our calculation we selected 175 top features using function xgb.importance() of ‘xgboost’ R package and then we worked with this set of features. We also calculated the predictions for Venta_hoy", "Venta_uni_hoy", "Dev_uni_proxima", "Dev_proxima" variables which were used on the 2nd level model. On the first level each of us used his own classifier options and features sets which gave results with good scores for single models without high correlation that is important for creating 2 level models. </p>
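<p>To make the lag-and-aggregate feature construction more concrete, here is a minimal pandas sketch; the grouping keys and lags are illustrative, and the actual solution used many more groupings and statistics.</p>
<pre><code>import pandas as pd

# df: transaction-level data with columns such as
# ['Semana', 'Cliente_ID', 'Producto_ID', 'Demanda_uni_equil', ...]
def add_lag_features(df, lags=(1, 2, 3)):
    out = df.copy()
    for lag in lags:
        agg = (df.groupby(['Semana', 'Cliente_ID', 'Producto_ID'])['Demanda_uni_equil']
                 .mean().reset_index()
                 .rename(columns={'Demanda_uni_equil': 'demand_lag%d' % lag}))
        agg['Semana'] += lag   # shift forward so week w sees the mean of week w-lag
        out = out.merge(agg, on=['Semana', 'Cliente_ID', 'Producto_ID'], how='left')
    return out
</code></pre>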
<p><strong>Training Methods</strong></p>
<p>For creating single models of the first level, we used xgboost classifier. The options for XGBoost classifiers can be found in our scripts. On the second level, we used validation results of the first level models as training sets for the prediction of target variable for the test set. We created a linear model and model based on ExtraTreesClassifier. On the third level, the results from the second level models were just weighted average.</p>
<p><strong>References</strong></p>
<p><a href="https://www.kaggle.com/lyytinen/grupo-bimbo-inventory-demand/basic-preprocessing-for-products">https://www.kaggle.com/lyytinen/grupo-bimbo-inventory-demand/basic-preprocessing-for-products</a>
<a href="https://www.kaggle.com/ybabakhin/grupo-bimbo-inventory-demand/products-clustering">https://www.kaggle.com/ybabakhin/grupo-bimbo-inventory-demand/products-clustering</a>
<a href="https://www.kaggle.com/mlandry/grupo-bimbo-inventory-demand/h2o-gbm/run/263502">https://www.kaggle.com/mlandry/grupo-bimbo-inventory-demand/h2o-gbm/run/263502</a></p>
<p>Team Slippery Appraisals
Year 2016</p>
<p>Our teammates used their own codes and features sets to produce first level models, part of the code was written in R, other part - using Python. Then these first level models were used for creating the models on the second level. </p>
<blockquote>
<p>You said that for the 1st level you use XGBoost Classifier. Do you mean XGBoost Regressor?</p>
</blockquote>
<p>For classifier training I used the function xgb.train() of “xgboost” R package.</p> |
Grupo Bimbo Inventory Demand | #19 place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Grupo Bimbo Inventory Demand
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I only trained on the rows from week 9. I used Apache Spark to create 12 features for those rows and XGBoost to train the model</p>
<ol>
<li>Avg demand for this client</li>
<li>Avg demand for this product</li>
<li>Weighted average demand for this product, weighted by total demand for each client</li>
<li>Avg demand for this product in this city</li>
<li>Avg demand for this product at this client</li>
<li>Avg demand one weeks ago for this product at this client</li>
<li>Avg demand two weeks ago for this product at this client</li>
<li>Avg demand three weeks ago for this product at this client</li>
<li>Avg demand four weeks ago for this product at this client</li>
<li>Avg demand five weeks ago for this product at this client</li>
<li><p>Predicted demand for this week by linear regression FTRL model using the following features:</p>
<p>For every row in week 9, I calculated the historic average demand (before week 9) that that client
had sold. Then I made tuples of the type (This_product_id,previously_sold_product_id) and hashed them.
The feature value was the average demand of the previously sold product. I also hashed the ID's in the row and gave them the feature value 1.0
The output value was the demand for the given row.</p></li>
<li><p>Predicted demand for this week by linear regression FTRL model using the following features:</p>
<p>For all rows before week 9, I hashed all IDs and gave them the value 1.0.
I also used poly4 interaction of the IDs and gave them the value 1.0
The output value was the demand for the given row.</p></li>
</ol>
<p>[quote=Zach;133122]</p>
<p>How'd you get the 1-week lag for week 10?</p>
<p>[/quote]</p>
<p>I used the same lagged features for week 10 and 11. I treated the entire test set as if it was week 10</p> |
Grupo Bimbo Inventory Demand | #6 Place Solution - Team Gilberto & Regis | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Grupo Bimbo Inventory Demand
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, congratulations to top 3 teams! It was a very hard problem and that placement is very well deserved.
That was a cool competition, thanks Kaggle and Bimbo for that experience!</p>
<p>Our solution uses week 9 for validation.
We trained more than 10 models, validated in week 9. But the final submission is basically a weighted average of 7 models.</p>
<p>Since the Private LB covers only week 11, we don't have lag1 feature values directly available to use in the models (probably that is the cause of the many position shake-ups at the end). So we built some models without lag1 features and some models using the predictions for week 10 (Public LB) as the lag1 features for week 11 (Private LB).
Models we trained:</p>
<ul>
<li>XGB( around +100 features: many combinations of lags)</li>
<li>LR</li>
<li>libFM(Sparse Features only)</li>
</ul>
<p>Our best single XGB scored around 0.433x on Public LB ;-D</p>
<p>Giba</p>
<p>Wow... Jumping positions is always good.</p>
<p>@aldente: I didn't submit my libFM, but it scored 0.452 on week 9.</p>
Grupo Bimbo Inventory Demand | A quick summary of Team Mystic (3rd place) solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Grupo Bimbo Inventory Demand
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>[quote=Sangxia;135447]</p>
<p>I tried to combine some of the shallow features you mentioned with libFM using xgboost but my CV scores always got worse than libFM alone. Could you share the parameters of your xgboost model and how important the shallow features are? Thanks!</p>
<p>[/quote]</p>
<p>Were you doing the same validation as we did? If that is the case, it seemed to improve our score with any libFM models and we didn't have to tune xgboost at all to get better scores.</p>
<p>[quote=Mattias Fagerlund;135445]</p>
<p>Hi, my training runs took so very long that once I realized that there were gains to be made to make predictions using previous predictions, I gave up on trying because it would take too much time. It would be interesting to know how effective that actually was.</p>
<p>Re 1) I was using shallow features similar to your implementation, but I didn't use max/min/std dev, that's a nice trick that I'll use in the future.</p>
<p>Re 2) What features did you use for this?</p>
<p>Thanks for sharing!</p>
<p>/m</p>
<p>[/quote]</p>
<p>Just the predicted result from those linear models. They actually didn't take much time to generate but were the most important features.</p>
<p>We use data from week 3 - 6 to generate features for week 7, and week 3 - 7 to generate features for week 8 - 9, and then train on week 7 to predict on week 8, and 9. Similarly, we use data from week 3 - 8 to generate features for week 9, and then train xgboost on week 9 to predict week 10 & 11.</p>
<p>We have two parts of features:</p>
<ol>
<li><p>Shallow features. We calculate max, min, median, mean, and standard deviation of demand on interactions of features. We also include features (Venta_uni_hoy, Venta_hoy, Dev_uni_proxima, Dev_proxima, Demanda_uni_equil) of the last 3 weeks.</p></li>
<li><p>Deep features. We train a dozen of linear models to get "deep" interaction signal between features. Vowpal wabbit, FTRL implemented with Pypy, libFM, and libFFM are all used to generate deep features.</p></li>
</ol>
<p>Combining these two parts, a single xgboost model is good enough to get the score we have on the public LB.</p>
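<p>As a rough illustration of how a "deep feature" column can be produced, the sketch below uses scikit-learn's hashing trick and SGD as a stand-in for the FTRL/VW/libFM implementations actually used; the data frame names (<code>train_weeks</code>, <code>week9</code>) are hypothetical.</p>
<pre><code>import numpy as np
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import SGDRegressor

hasher = FeatureHasher(n_features=2**22, input_type='string')

def to_tokens(df):
    # every row becomes a bag of id tokens plus a couple of pairwise interactions
    return [['c_%d' % c, 'p_%d' % p, 'a_%d' % a, 'cp_%d_%d' % (c, p), 'pa_%d_%d' % (p, a)]
            for c, p, a in zip(df.Cliente_ID, df.Producto_ID, df.Agencia_ID)]

linear = SGDRegressor(penalty='l2', alpha=1e-6)
linear.fit(hasher.transform(to_tokens(train_weeks)),        # e.g. weeks 3-7
           np.log1p(train_weeks.Demanda_uni_equil))

# out-of-time predictions become one "deep feature" column for the week-9 xgboost set
week9['deep_linear'] = linear.predict(hasher.transform(to_tokens(week9)))
</code></pre>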
<p>Please comment if you have any questions. And code with detailed doc will be linked here after we get the final approval from kaggle and grupo bimbo. :)</p>
<p>[quote=PabloMarin;135517]</p>
<p>@Little Boat</p>
<p>Please, forgive my "naiveness" but can you please help me understand your point (2), aka, Deep Features?
I would like to understand the theory behind it and how you did it. Because obviously these deep features was the reason of your success.</p>
<p>Thank you in advanced for the teaching ;)</p>
<p>[/quote]</p>
<p>Briefly, you can think of the shallow features as simple descriptive statistics while the deep features are linear interaction between all features. Shallow features describe the rough "shape" of those interactions, for example, for the same pair of product and client, they will all have the same values, while deep features model all interactions and give predictions based on all features. You can think shallow feature as "big picture" while deep features are "detail oriented".</p> |
Grupo Bimbo Inventory Demand | Low Memory Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Grupo Bimbo Inventory Demand
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Denis, thanks.</p>
<p>xgboost has an option to use an approximate algorithm that does what you do more or less. If I remember well it bins large cardinality features using 300 bins. But doing so yourself is definitely more memory efficient.</p>
<p>Congratulations to the winners! That was a challenging competition given the data set size. Like many, I am waiting for some info on the approaches used by the top teams.</p>
<p>In the meantime, here is some info on my solution (18th on final LB).</p>
<p>In order to deal with data size I decided to build one model per product id. I knew this would leave money on the table, but it was rather effective with a 16 GB machine (macbook pro). Did anyone else use the same trick?</p>
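<p>Schematically, the per-product training is just a loop over a groupby on Producto_ID; a minimal sketch with xgboost is below, where <code>train</code> and its feature columns are assumed to be prepared already and the hyperparameters are illustrative.</p>
<pre><code>import numpy as np
import xgboost as xgb

models = {}
feature_cols = [c for c in train.columns if c not in ('Demanda_uni_equil', 'Producto_ID')]

for product_id, grp in train.groupby('Producto_ID'):
    dtrain = xgb.DMatrix(grp[feature_cols], label=np.log1p(grp['Demanda_uni_equil']))
    models[product_id] = xgb.train({'eta': 0.1, 'max_depth': 6,
                                    'objective': 'reg:squarederror'},
                                   dtrain, num_boost_round=100)
</code></pre>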
<p>CV setting is time based. Use week 9 as validation set, and train models on weeks 6-8. Local CV is well correlated with the public LB, with an almost constant offset.</p>
<p>Features are lags and averages over various groupings. For instance, group by agencia and route, then average sales and returns. In some cases the number of non-zero values and the std are also computed. Then join this with the training data.</p>
<p>More precisely, for the first half of competition, I used previous 4 weeks to build features. I used weeks 3-5 to build features for week 6, weeks 3-6 to build features for week 7, weeks 4-7 to build features for week 8, weeks 5-8 to build features for week 9, and weeks 6-9 to build features for test. Missing features in weeks 6 are treated as NaN. </p>
<p>For CV, models were trained on weeks 6-8 with the lag features described above, and validated on week 9 with its lag features. For test predictions, weeks 7-9 with their lag features were used.
This led me to 7th place at some point. I wish competition ended then ;)</p>
<p>I then switched gear, like many, to focus on 2 weeks lag predictions; CV setting is almost the same, but one week is omitted. For instance weeks 4-6 are used to build features for week 8, week 7 is omitted. The rest is unchanged.</p>
<p>What we lose with decomposition is cross product interactions. Two ways were used to cope with that. </p>
<p>First, features based on groupings across products were capturing some interactions.</p>
<p>Second, in a last pass, all products with poor local CV values were grouped to form a much smaller data set (about 2.5M entries instead of 75M). Then I could train xgboost on it. This was rather effective when predicting one week ahead, but I could not get it to work well when predicting 2 weeks ahead because of lack of time.</p>
<p>I did not use ensembling, also because of lack of time. </p>
<p>Time is the main issue with this approach. Local CV plus submission preparation is taking between 24 and 60 hours as there are almost 1500 models to build. This is the classical time vs memory tradeoff. However, it could scale very well on a cluster as each model can be trained concurrently with others. I will definitely look into a cluster next time!</p>
<p>The biggest surprise to me is that the one-week-ahead model did as well as the two-week-ahead model on the private LB. Probably because my last pass was effective with the former.</p>
<p>I did all in Python, with numpy, pandas, scikit-learn and xgboost mainly. </p>
<p>Hope the above is of interest to readers.</p> |
H&M Personalized Fashion Recommendations | 10th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, thanks for the great competition to H&M and Kaggle. I enjoyed it a lot, it's a very nice kind of challenge, as it's not heavily compute-gated and allows for a wide range of solutions.</p>
<p>Many Kagglers already shared their solution and since I followed the same framework as most, I will only mention some differences.</p>
<h6>CV-LB correlation</h6>
<ol>
<li>Many people reported a gap between CV and LB or a "slowdown" when their models improved. I never experienced that; a linear fit between my CV and LB gives: f(x) = 0.793x + c with R²=98.96%.</li>
<li>The problem is most likely caused by information leaking from the future. For example when I train a CF-model on all data up to 09-15 and later use this to make predictions for the week 09-01, I will boost my CV by implicitly using purchase information from the future.</li>
<li>I avoided this by training a full set of models for every week. I tagged the models with the last date used to create it. That way it was impossible to leak any information.</li>
<li>While this sounds good, it also was my "downfall". I simply did not have enough compute to do this for many different models and weeks. Which is why my recall methods were much less diverse than those of the top solutions and which is why I only used four weeks of training data in the end.</li>
</ol>
<h6>Retrieval strategy / Features</h6>
<ol>
<li>In addition to what many people used (rebuys, I2I, CF, Top-Sellers etc.), I created a simple sales prediction model. Based on recent sales of an article, it predicted next week's sales.</li>
<li>The main use of this model was helping to detect products that are phasing in or out (since we don't have that data).</li>
<li>Example: an article that is only available for a short time (special collaboration etc.) might have daily sales like: 0 0 28 431 389 105 32 11. Using recent popularity will massively overestimate next week's sales, while the prediction model learns that the article is phasing out (out of stock).</li>
<li>It was in the end one of the most used features, probably since it covers popularity and (future) availability in one.</li>
<li>My workhorse for similarity was LightFM (BPR and WARP, with and without item features). I used it for recommendation and item-to-item similarity by comparing feature embeddings (see the sketch after this list).</li>
</ol>
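<p>A minimal sketch of item-to-item similarity from LightFM embeddings (hyperparameters here are illustrative, not the ones used in the solution):</p>
<pre><code>import numpy as np
from lightfm import LightFM

def lightfm_item_similarity(interactions, topk: int = 50):
    # interactions: scipy sparse user x item matrix of purchases.
    model = LightFM(no_components=64, loss="warp")
    model.fit(interactions, epochs=20, num_threads=4)
    # L2-normalise the learned item embeddings so dot products become cosines.
    emb = model.item_embeddings
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = emb @ emb.T                         # (n_items, n_items) cosine matrix
    # For each item, the top-k most similar other items (position 0 is the item itself).
    return np.argsort(-sims, axis=1)[:, 1:topk + 1]
</code></pre>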
<h6>Hardware / Optimization</h6>
<ol>
<li>Desktop with 12C and 64GB Ram.</li>
<li>I used feature storing liberally.</li>
<li>To decouple the training data size from the RAM available, I trained my LightGBM model from HDF5.</li>
<li>Final model was an ensemble of four LightGBM models. But the improvement was rather small, ~1% compared to the single best model.</li>
</ol> |
H&M Personalized Fashion Recommendations | 11th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>As you mentioned, that should be <code>faiss-gpu</code>. Thank you for pointing that out!</p>
<p>We didn't use a GPU to train the models, but we used one to generate candidates (faiss-gpu, specifically).<br>
We tested using a GPU to train the CatBoost model, but it yielded much worse performance.</p>
<p>Thanks Kaggle and H&M for hosting this exciting competition, my teammate <a href="https://www.kaggle.com/niwatori" target="_blank">@niwatori</a> and all participants competing with us.</p>
<p>Here is a brief summary of our solution.</p>
<h2>1. Candidate generation</h2>
<p>As discussed in the forum, we used multiple recall methods to generate the candidates.</p>
<ul>
<li>repurchase</li>
<li>items with the same product code in repurchase</li>
<li>popular items</li>
<li>category-wise popular items<ul>
<li>list the popular items for each department_no</li>
<li>add them if the customer has bought the item with the same department_no</li></ul></li>
<li>item2item collaborative filtering</li>
<li>similarity based on tag ohe vector<ul>
<li>create item ohe vector from the tags (product_type_no, product_group_name, etc…)</li>
<li>create user vector which is the average of the item vector bought by the customer</li>
<li>select top-k items based on the similarity between the user vector and the item vector (see the sketch after this list)</li></ul></li>
</ul>
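<p>A small sketch of how the vector-based recall can be done with faiss (the GPU version via <code>faiss.index_cpu_to_gpu</code> works the same way); the tag one-hot item vectors and averaged user vectors are assumed to be built beforehand:</p>
<pre><code>import faiss
import numpy as np

def topk_by_similarity(item_vecs: np.ndarray, user_vecs: np.ndarray, k: int = 100):
    item_vecs = np.ascontiguousarray(item_vecs, dtype="float32")
    user_vecs = np.ascontiguousarray(user_vecs, dtype="float32")
    # Normalise so that inner product equals cosine similarity.
    faiss.normalize_L2(item_vecs)
    faiss.normalize_L2(user_vecs)
    index = faiss.IndexFlatIP(item_vecs.shape[1])
    index.add(item_vecs)
    scores, item_idx = index.search(user_vecs, k)  # both of shape (n_users, k)
    return scores, item_idx
</code></pre>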
<p>We left most of the columns added during the candidate generation stage so that we can utilize them as features.</p>
<h2>2. Feature engineering</h2>
<p>In addition to the columns generated during the above phase, we added the following features (a small pandas sketch of the dynamic features appears after the list):</p>
<ul>
<li>user static features</li>
<li>item static features</li>
<li>dynamic features (same for both user and item)<ul>
<li>mean and std of price, sales_channel_id</li>
<li>date-diff since the last transaction</li>
<li>volume/rank of volume in the last few weeks</li></ul></li>
<li>user-item features<ul>
<li>date-diff since the day (user, item) pair last occurred</li>
<li>count of (user, item), (user’s age, item) pair occurred in the last few weeks</li></ul></li>
<li>dot product of item ohe vector and user vector</li>
<li>user embedding from light-fm</li>
</ul>
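<p>A small pandas sketch of the dynamic features, assuming a transactions frame with <code>customer_id</code>, <code>article_id</code>, <code>t_dat</code> (datetime) and <code>price</code> columns (names follow the competition data; the exact windows are illustrative):</p>
<pre><code>import pandas as pd

def article_dynamic_features(tx: pd.DataFrame, ref_date: pd.Timestamp) -> pd.DataFrame:
    # tx is assumed to be pre-filtered to the lookback window of interest.
    feats = tx.groupby("article_id").agg(
        price_mean=("price", "mean"),
        price_std=("price", "std"),
        volume=("article_id", "size"),
        last_sale=("t_dat", "max"),
    )
    # Date-diff since the article was last sold, relative to the target week.
    feats["days_since_last_sale"] = (ref_date - feats["last_sale"]).dt.days
    feats["volume_rank"] = feats["volume"].rank(ascending=False)
    return feats.drop(columns="last_sale").reset_index()
</code></pre>
<p>The same pattern, grouped by <code>customer_id</code> or by the (customer, article) pair, gives the user-side and user-item features.</p>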
<h2>3. Training</h2>
<p>We used the CatBoost ranker (YetiRank). We also tried LGBMRanker, but CatBoost gave much better results in our case. We found that the default parameters gave decent results compared to the tuned ones.</p>
<p>We used 6, 8, and 12 weeks for training and left the last week for validation. It took up to 20h on a 224-core machine to train a single model.</p>
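<p>A minimal sketch of the CatBoost ranking setup (near-default parameters, as the write-up suggests; the iteration count is illustrative). Rows must be ordered so that each customer's candidates form a contiguous <code>group_id</code> block:</p>
<pre><code>from catboost import CatBoostRanker, Pool

def fit_ranker(X_train, y_train, group_train, X_valid, y_valid, group_valid):
    train_pool = Pool(data=X_train, label=y_train, group_id=group_train)
    valid_pool = Pool(data=X_valid, label=y_valid, group_id=group_valid)
    model = CatBoostRanker(loss_function="YetiRank", iterations=1000, verbose=100)
    model.fit(train_pool, eval_set=valid_pool)
    return model
</code></pre>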
<p>Our best single was cv: 0.03589, public: 0.03381, private: 0.03391.</p>
<p>Throughout the competition, our validation/public basically correlated positively, but we observed that changing the training weeks gave a negative correlation. (worse cv score, better public/private score)</p>
<h2>4. Ensembling</h2>
<p>We followed the ensembling method introduced by <a href="https://www.kaggle.com/code/chaudhariharsh/lb-0-0238-h-m-ensembling-how-to-get-bronze" target="_blank">https://www.kaggle.com/code/chaudhariharsh/lb-0-0238-h-m-ensembling-how-to-get-bronze</a></p>
<p>We ensembled 4 models trained with different training weeks and different parameters.</p>
<h2>Notes</h2>
<p><strong>update</strong> Here is our source code: <a href="https://github.com/ryowk/kaggle-h-and-m-personalized-fashion-recommendations" target="_blank">https://github.com/ryowk/kaggle-h-and-m-personalized-fashion-recommendations</a></p>
<p>For a given <code>age</code>, we wanted to get an age range <code>[age-i, age+i]</code> such that the number of the people in the range equals that of between 24 and 26 yo.<br>
<code>age shifts</code> gives the optimal <code>i</code> for each age. </p>
<ol>
<li>To sort by <code>day_rank</code> first and then sort by <code>volume_rank</code> if there is a tie.</li>
<li>We assigned the week number backward, so week(0) corresponds to the last week. So we performed the calculations over the last 12 weeks.</li>
</ol> |
H&M Personalized Fashion Recommendations | 12th place Writeup | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>You are right. Real code is here:</p>
<pre><code>from typing import Optional

import torch


class AdditiveFeatureEmbedding(torch.nn.Module):
    """Sums the embeddings of all categorical and numerical input features."""

    def __init__(
        self,
        categorical_embeddings: Optional[dict] = None,
        numerical_embeddings: Optional[dict] = None,
    ):
        super().__init__()
        # Embedding layers for ID-like inputs (e.g. article_embed_ids).
        self.categorical_embeddings = torch.nn.ModuleDict(
            categorical_embeddings or {}
        )
        # Linear projections for dense inputs (e.g. PCA-compressed image features).
        self.numerical_embeddings = torch.nn.ModuleDict(
            numerical_embeddings or {}
        )

    @property
    def columns(self):
        return [
            *self.categorical_embeddings.keys(),
            *self.numerical_embeddings.keys(),
        ]

    def __call__(self, **inputs):
        # Each feature is mapped to the same hidden dimension and summed.
        h1 = sum([
            f(inputs[k])
            for k, f in self.categorical_embeddings.items()
        ])
        h2 = sum([
            f(inputs[k])
            for k, f in self.numerical_embeddings.items()
        ])
        h = h1 + h2
        return h
</code></pre>
<p>And then</p>
<pre><code># NUM_EMBEDDINGS and HIDDEN_DIM are hyperparameters; "..." marks further
# feature embeddings omitted here.
AdditiveFeatureEmbedding(
    categorical_embeddings={
        "article_embed_ids": torch.nn.Embedding(
            num_embeddings=NUM_EMBEDDINGS,
            embedding_dim=HIDDEN_DIM,
        ),
        ...
    },
    numerical_embeddings={
        "efficientnet_pca": torch.nn.LazyLinear(
            out_features=HIDDEN_DIM,
        ),
        ...
    },
)
</code></pre>
<p>The binary classifier takes as input the features of the transaction articles that have the top-2~5 cosine similarity to the candidate articles.<br>
This cosine similarity is calculated by article embeddings that can be learned by BCELoss, so I understood that this is a kind of metric learning.<br>
(Sorry if my understanding is wrong.)</p>
<p>"EfficientNet image features compressed by PCA" are simply computed with timm and cuml.PCA, and then combined with other categorical / numerical features as shown in the figure.<br>
<img src="https://i.ibb.co/KsWSxBc/2022-05-11-15-21-05.png" alt="figure"></p>
<p>First of all, I would like to thank the competition hosts.<br>
Here I will try to summarize some of the main points of our solution.</p>
<p><img src="https://i.ibb.co/vxjNPwv/2022-05-10-21-39-35.png" alt="2022-05-10-21-39-35"></p>
<h3><strong>GBDT part ( <a href="https://www.kaggle.com/onodera" target="_blank">@onodera</a> )</strong></h3>
<p><a href="https://www.kaggle.com/onodera" target="_blank">@onodera</a> tried to solve this recommendation task by splitting into 3 models with GBDT(XGB).</p>
<ol>
<li>Reorder<br>
Predict whether the customer will purchase an article they have purchased before</li>
<li>new item top1k<br>
Predict whether the customer will purchase one of the most purchased articles (top 1k) from the last week</li>
<li>Other Color<br>
Predict whether a customer will purchase a different color of a product they have previously purchased</li>
</ol>
<p>This gave <strong>0.03242</strong> on both LBs by blending the above outputs.</p>
<h3><strong>NN part ( KF )</strong></h3>
<p>I used E2E NN approach based on the great <a href="https://www.kaggle.com/code/aerdem4/h-m-pure-pytorch-baseline" target="_blank">baseline notebook</a> published by <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a>.</p>
<ol>
<li>Learning metrics between candidate articles and transaction articles<ul>
<li>Assign the following knowledge to the article's embedding vectors<ul>
<li>Article meta information ID (e.g. product code)</li>
<li>EfficientNet image features compressed by PCA</li>
<li>Graph embedding using Deep Walk and Word2Vec</li></ul></li>
<li>Training with the following classification losses.</li></ul></li>
<li>Binary classification of whether a candidate article will be purchased or not<ul>
<li>Top 2~5 transaction article features similar to each candidate articles<ul>
<li>Similarity scores computed by metric learning module</li>
<li>Statistical features for time elapsed since last purchase</li></ul></li>
<li>Optimize BCELoss & DiceLoss</li></ul></li>
</ol>
<p>Each single model then got <strong>0.0298~0.0303</strong> on both LBs.</p>
<h3><strong>NN Stacking part ( <a href="https://www.kaggle.com/shimacos" target="_blank">@shimacos</a> )</strong></h3>
<p><a href="https://www.kaggle.com/shimacos" target="_blank">@shimacos</a> built 2 Stacking model from KF's many NN models with LightGBM.<br>
By including these models, the result of weighted blend improved from 0.03377 to 0.03394.</p>
<h3><strong>Weighted blending ( KF )</strong></h3>
<p>I blended GBDT, NN, and stacked NN models using Optuna to optimize map12 directly.<br>
And then got CV: <strong>0.04047</strong>, Public: <strong>0.03389</strong>, and Private: <strong>0.03394</strong>.</p> |
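<p>A hedged sketch of how such a weighted blend can be tuned with Optuna; <code>map_at_12</code> and the per-model score arrays are hypothetical stand-ins for the team's own evaluation code:</p>
<pre><code>import optuna

def find_blend_weights(model_scores, labels, map_at_12, n_trials: int = 200):
    # model_scores: list of candidate-score arrays, one per model, same row order.
    def objective(trial):
        weights = [trial.suggest_float(f"w{i}", 0.0, 1.0)
                   for i in range(len(model_scores))]
        blended = sum(w * s for w, s in zip(weights, model_scores))
        return map_at_12(blended, labels)      # maximise validation MAP@12
    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=n_trials)
    return study.best_params
</code></pre>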
H&M Personalized Fashion Recommendations | 13th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you <a href="https://www.kaggle.com/muhammedtausif" target="_blank">@muhammedtausif</a> </p>
<p>Thank you!<br>
The library we used is called <a href="https://radimrehurek.com/gensim/" target="_blank">gensim</a></p>
<p>Before going on to our solution, I would like to thank Kaggle staff and H&M team, alongside with all competitors on the leaderboard for competing with us in this competition. And of course, special thanks to <a href="https://www.kaggle.com/A.Sato" target="_blank">@A.Sato</a>, <a href="https://www.kaggle.com/taksai" target="_blank">@taksai</a>, and <a href="https://www.kaggle.com/tomo20180402" target="_blank">@tomo20180402</a> who are my teammates, for their amazing effort. We are honored to win the gold medal!</p>
<h1>Overall Strategy</h1>
<p>The overall strategy of our solution is based on four steps.</p>
<ol>
<li>Candidate Selection</li>
<li>Feature Engineering</li>
<li>Ranking</li>
<li>Ensembling</li>
</ol>
<h1>Key Points</h1>
<p>We trained three LightGBM ranker models with different candidate selection methodologies, and finally used an ensemble technique to boost our score.<br>
The key idea of our solution was utilizing item2vec embeddings; item2vec is an algorithm inspired by word2vec.<br>
Article embeddings were taken directly from item2vec, and customer embeddings were created by averaging, dimension by dimension, the embeddings of all articles each customer purchased in the past.</p>
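<p>A minimal item2vec sketch with gensim (hyperparameters are illustrative): each customer's purchase history is treated as one "sentence" of article_id tokens, and the customer embedding is the mean of the embeddings of the articles they bought:</p>
<pre><code>import numpy as np
from gensim.models import Word2Vec

def item2vec_embeddings(purchase_sequences, dim: int = 32):
    # purchase_sequences: list of lists of article_id strings, one per customer.
    model = Word2Vec(sentences=purchase_sequences, vector_size=dim,
                     window=10, min_count=1, sg=1, workers=4, epochs=10)
    item_vecs = {a: model.wv[a] for a in model.wv.index_to_key}
    customer_vecs = np.vstack([
        np.mean([item_vecs[a] for a in seq], axis=0)
        for seq in purchase_sequences
    ])
    return item_vecs, customer_vecs
</code></pre>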
<h1>Step1: Candidate(Article) Selection</h1>
<p>In the candidate selection phase, we tried to sample positive and negative samples for each customer. The candidate selection methods were different for each of the three models, and the diversity of candidates between our models leveraged our scores after ensembling.</p>
<h2>Model1</h2>
<p>The candidate generation for Model1 was a concatenation of 4 strategies. The past 14 weeks were used to train the model.</p>
<ol>
<li>recent 50 purchased articles for each customer</li>
<li>top 20 products with the same product code of each article in 1.</li>
<li>last week top 100 articles of all customers for each sales_channel_id group</li>
<li>last week top 30 of all customers for each [sales_channel_id, index_group_no] group</li>
</ol>
<h2>Model2</h2>
<p>Aside from recently purchased items (and items with the same product code as recently purchased items), my main model relies solely on local popularity among the nearest 10,000 customers: 1. I obtained each user's latent representation by taking the mean of the item2vec embeddings (32 dimensions) of the items the customer purchased in the past; 2. I then selected the 10,000 nearest customers for EACH customer based on the cosine similarity of these user representations; 3. I then picked the top 300 last-week popular items among those 10,000 nearest customers as candidates.<br><br>
As a single model, this approach got the best score among the three models (Private LB 330-ish).</p>
<h2>Model3</h2>
<p>Firstly, I created 4 types of clusters for each customer based on the techniques below.<br>
(Article-based embeddings were averaged by each dimension of past bought articles to create customer embeddings)</p>
<ul>
<li>Swin-Transformer Image embeddings</li>
<li>Age of each customer</li>
<li>item2vec based article embeddings</li>
<li>postal_code2vec (inspired by item2vec) embeddings</li>
</ul>
<p>
After the clustering process, each customer will belong to 4 different clusters. <br>
For each cluster, I created an article ranking for the past 30 days, and made a weighted overall ranking depending on which clusters each customer belonged to.<br>
Recently bought article_ids and product_codes were weighted more heavily for each customer.
From the overall ranking, Model3 took the top 350 articles per user.
</p>
<h1>Step2: Feature Engineering</h1>
<p>For each of the candidates, we created features as bellow.</p>
<ul>
<li>deep-learning-based latent representations<ul>
<li>Article embeddings from BERT4Rec</li>
<li>Article embeddings from LightGCN</li>
<li>Latent representation of the detailed description from RoBERTa (plus PCA for dimensionality reduction)</li>
<li>Swin-Transformer-based image features</li></ul></li>
<li>cosine similarity of the article and the customer based on swin-transformer embeddings and item2vec embeddings</li>
<li>number of past purchases/interval_days by the user for the candidate article_id, product_code, etc</li>
<li>number of past purchases/interval_days by the cluster(clustered using item2vec) of the relevant user for the candidate article_id, product_code, etc</li>
<li>article sales forecasts (forecasted by LightGBM regression model) for the corresponding week</li>
</ul>
<h1>Step3: Model</h1>
<p>All three models were trained with LGBMRanker.<br>
Model1 was trained with 14 weeks of data, and Model2/3 were trained using only candidates from the last week.</p>
<h1>Step4: Ensembling</h1>
<p>Ensembling was simply based on the methodology used in many of the public notebooks (with a slight change).</p>
<pre><code>def cust_blend(dt, W=[1, 1, 1], base=3):  # base option is added
    # Create a list of all model predictions (each is a space-separated string).
    REC = []
    REC.append(dt['model1'].split())
    REC.append(dt['model2'].split())
    REC.append(dt['model3'].split())

    # Weighted reciprocal-rank voting: an article ranked n-th by model M
    # contributes W[M] / (n + base) to its total score.
    res = {}
    for M in range(len(REC)):
        if type(REC[M]) == list:
            for n, v in enumerate(REC[M]):
                if v in res:
                    res[v] += (W[M] / (n + base))
                else:
                    res[v] = (W[M] / (n + base))

    # Sort articles by their blended score and keep the top 12.
    res = list(dict(sorted(res.items(), key=lambda item: -item[1])).keys())
    return ' '.join(res[:12])
</code></pre>
<h1>Failures</h1>
<ul>
<li>All the neural-network-driven approaches discussed in academic papers failed for us. Recbole and recommenders gave us only a marginal boost in this competition.</li>
<li>Increasing training weeks worsened the local scores for model2 and 3.</li>
<li>catboost (its score was worse compared to LightGBM)</li>
<li>general MLP</li>
</ul> |
H&M Personalized Fashion Recommendations | 16th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to my teammate Ryan <a href="https://www.kaggle.com/wj19971997" target="_blank">@wj19971997</a>; we got 16th place.<br>
Thanks to H&M and Kaggle. This was a wonderful competition.</p>
<h1>Framework</h1>
<p><img src="https://postimg.cc/hJYnVpPJ" alt="Framework"></p>
<h1>Data-splitting</h1>
<p>We split the data into 3 groups: <br>
cg1 (customer group 1) is the users with transactions in the last 30 days; <br>
cg2 (customer group 2) is the users without transactions in the last 30 days <br>
but with transactions in their history; <br>
cg3 (customer group 3) is the users without any transactions in their history. </p>
<ul>
<li>cg1 and cg2: multi-recalls + rank.</li>
<li>cg3: popular items recall.</li>
</ul>
<h1>Recalls</h1>
<ul>
<li>popular items recall</li>
<li>repurchase recall</li>
<li>binaryNet recall</li>
<li>ItemCF recall</li>
<li>UserCF recall</li>
<li>W2V content recall</li>
<li>NLP content recall</li>
<li>Image content recall</li>
<li>Category content recall</li>
</ul>
<p>Each recall method will recall 100 items for every user, then drop duplicates.</p>
<h1>Rank</h1>
<h2>Feature Engineer</h2>
<h3>Item Feature</h3>
<p>Group by <code>article_id</code>, aggregate the columns below, and compute the listed statistics.</p>
<ul>
<li>cols: customer_id, price, sales_channel_id, and so on.</li>
<li>op: 'min', 'max', 'mean', 'std', 'median', 'sum', 'nunique'</li>
</ul>
<h3>User Feature</h3>
<p>Group by <code>customer_id</code>, aggregate the columns below, and compute the listed statistics.</p>
<ul>
<li>cols: price, article_id, sales_channel_id, and so on.</li>
<li>op: 'min', 'max', 'mean', 'std', 'median', 'sum', 'nunique'</li>
</ul>
<h3>Interaction Feature</h3>
<ul>
<li>count of user-item purchases in different windows (1 day, 3 days, 1 week, 2 weeks, 1 month)</li>
<li>the time-diff since the user last purchased the item (see the sketch after this list)</li>
</ul>
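<p>A small sketch of these interaction features, assuming <code>tx</code> has <code>customer_id</code>, <code>article_id</code> and <code>t_dat</code> columns and <code>ref_date</code> is the first day of the target week:</p>
<pre><code>import pandas as pd

def interaction_features(tx: pd.DataFrame, ref_date: pd.Timestamp) -> pd.DataFrame:
    out = tx.groupby(["customer_id", "article_id"]).agg(last_buy=("t_dat", "max"))
    out["days_since_last_buy"] = (ref_date - out["last_buy"]).dt.days
    # Purchase counts of the pair in several trailing windows.
    for days in (1, 3, 7, 14, 30):
        recent = tx[tx["t_dat"] >= ref_date - pd.Timedelta(days=days)]
        out[f"buy_cnt_{days}d"] = recent.groupby(["customer_id", "article_id"]).size()
    return out.drop(columns="last_buy").fillna(0).reset_index()
</code></pre>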
<h3>other features</h3>
<ul>
<li>ItemCF score</li>
<li>BinaryNet score</li>
</ul>
<h3>Model</h3>
<ul>
<li>lightgbm ranker</li>
<li>lightgbm binary</li>
</ul>
<h1>Ensemble</h1>
<p>Ref to <a href="https://www.kaggle.com/code/tarique7/lb-0-0240-h-m-ensemble-magic-multi-blend" target="_blank">this link</a></p> |
H&M Personalized Fashion Recommendations | 17th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thank you to the competition organizers for a great competition.</p>
<p>I am very disappointed to have missed the gold medal, but I gained a lot of experience through this competition!</p>
<p>I will share my solution, including the thought process that led me to try that solution and the methods that did not work.</p>
<h1>Overview</h1>
<p>Since GBDT model was used in the top solutions of the past recommendation competitions on Kaggle (like <a href="https://www.kaggle.com/competitions/instacart-market-basket-analysis" target="_blank">this</a>), I tried GBDT model early in the competition and got good scores.</p>
<p>I did not use the ranking models such as LGBMRanker and CatboostRanker (loss: YetiRank) because solving this task as a classification problem provided better results. Finally, I used Catboost, which has a high prediction speed.</p>
<h1>Data Handling</h1>
<p>I used only the last week of transaction data (2020-09-16 to 2020-09-22) for validation.</p>
<p>I created features using non-validation data (~2020-09-15) and created 0/1 flag based on whether or not the item was purchased during the validation period (2020-09-16 to 2020-09-22).</p>
<p><strong>CV Strategy</strong><br>
5fold / GroupKfold(customer_id)</p>
<h1>Candidate Generation</h1>
<p>I generated a total of 27,341,620 candidates for 68,984 customers.</p>
<ul>
<li>top 300 popular items*</li>
<li>previously purchased items</li>
<li>items with the same product code as previously purchased items</li>
</ul>
<p>(*) I used the top 300 popular items from 2020-09-16 to 2020-09-22 instead of from 2020-09-09 to 2020-09-15. As a result, the difference between LB and CV was relatively small compared to other competitors. (CV: 0.0385, Public LB: 0.0326, Private LB: 0.0330)</p>
<p>For candidate generation, I tried the following other methods, but they did not work.</p>
<ul>
<li>top 300 popular items by age or gender</li>
<li>items with the same <code>department_no</code>, <code>section_no</code> and <code>product_type_no</code> as previously purchased items<br>
(This idea comes from the fact that 13.8% of customers purchased items with the same <code>department_no</code>, <code>section_no</code> and <code>product_type_no</code> within 3 weeks.)<br>
(Reference: <a href="https://www.kaggle.com/code/lichtlab/do-customers-buy-the-same-products-again" target="_blank">this notebook</a>)</li>
</ul>
<h1>Feature Engineering</h1>
<h5>Customer features</h5>
<ul>
<li>customer attributes (<code>age</code>, <code>gender</code> etc)</li>
<li>number of repeats of the same item in the past by each customer</li>
<li>average purchase span / difference between average purchase span and last purchase date</li>
<li>mean price / max price</li>
<li>mean channel_id by each customer</li>
</ul>
<h5>Article features</h5>
<ul>
<li>article attributes (<code>product_code</code>, <code>section_no</code> etc)</li>
<li>number of sales per week (week0~week5)</li>
<li>number of sales per day (day0~day6)</li>
<li>mean channel_id by each article</li>
</ul>
<h5>Customer x Article features</h5>
<ul>
<li>last purchase date with the same attributes as the candidate item</li>
<li>difference between mean channel_id by each customer and by each article</li>
<li>purchase rate of the candidate item by customers of the same age</li>
<li>percentage of items with the same attributes as the candidate item among the customer's past purchases</li>
<li>CF features (reference: <a href="https://www.kaggle.com/code/poteman/hm-item-cf" target="_blank">this notebook</a>)</li>
</ul>
<p>By adding these CF features, the score improved to CV: 0.0385, Public LB: 0.0326.<br>
However, when I created too many CF features with slightly different conditions, for some reason the CV went up but the LB went down (CV: 0.0397, Public LB: 0.0319).</p>
<h1>Model</h1>
<p>I trained one Catboost model on all candidates.<br>
I also tried training a different model for each candidate-generation strategy, but the performance did not improve.</p>
<h5>Hyperparameter Tuning</h5>
<p>Since this data was imbalanced (positive samples: 69141, negative samples: 27272479), I set the <code>scale_pos_weight</code> parameter.</p>
<p>If I follow the <a href="https://catboost.ai/en/docs/references/training-parameters/common#scale_pos_weight" target="_blank">official documentation</a>, it would be 394(=27272479/69141), but to avoid overfit, I set it to 100. Setting <code>scale_pos_weight</code> contributed significantly to the result.</p>
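<p>A sketch of the imbalance handling described above (other parameters are illustrative defaults, not the tuned values):</p>
<pre><code>from catboost import CatBoostClassifier

model = CatBoostClassifier(
    iterations=1000,
    learning_rate=0.05,
    scale_pos_weight=100,   # capped below neg/pos ≈ 394 to avoid overfitting
    eval_metric="AUC",
    verbose=100,
)
# model.fit(train_features, train_labels, eval_set=(valid_features, valid_labels))
</code></pre>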
<p>Since the number of candidates and the number of positive samples differ from customer to customer, I tried to set the weight accordingly, but it did not work.</p>
<h1>Ensemble</h1>
<p>I ensembled several models using the code in <a href="https://www.kaggle.com/code/titericz/h-m-ensembling-how-to" target="_blank">this notebook</a>. Weight was all set to 1 to avoid overfit to the LB.</p>
<h1>Post Process</h1>
<p>Of the 1,371,980 customers in sample_submission.csv, 9699 customers have no data in transactions_train.csv. </p>
<p>For the following reasons, I assumed that all of these 9699 customers would have made a purchase during the prediction week.</p>
<ul>
<li>5572 customers purchased items for the first time in week0.</li>
<li>the data shows that a discount sale has been held on the last Saturday in September for the past two years, and the number of customers was higher compared to other weeks.</li>
</ul>
<p>Although customer attributes and number of recent sales were included in the features of Catboost model, the prediction results were not convincing. So I overwrote the predicted results for new customers with the Top 12 most popular items.</p>
<ul>
<li>Looking at last year's data, the percentage of online purchases increased significantly on the last Saturday in September due to a discount sale.<br>
Therefore, I tried post-processing with the hypothesis that if a similar discount sale was held on the last Saturday of September in 2020, products with an originally high ratio of online purchases would sell well, but it did not work.</li>
</ul>
<h1>Environment</h1>
<p>All code was run on Colab Pro+.</p>
<hr>
<p>I am still learning machine learning, so if you have any suggestions or advice, I would be glad to hear them :)</p>
<p>I'll upload the code soon.</p>
<p>Thank you so much.<br>
I will actively post discussions and codes in the future!</p>
<p>Exactly!<br>
Since the evaluation in LB is only done for customers who purchased between 2020-09-23 ~ 2020-09-29, I only trained on customers who purchased in week 105 (2020-09-16 ~ 2020-09-22).</p>
<p>Thank you so much!</p> |
H&M Personalized Fashion Recommendations | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, equal to the number of positive samples in the top-100 recalled items.</p>
<p>I removed duplicate purchases by each customer per week, but kept them if they existed in different weeks. I retrained on the full dataset with more rounds, about 1.1X.</p>
<p>Adjusting the top N of each retrieval strategy and the strategies' priority manually, then merging them to select the top 100, optimizing the hit number (positive samples).</p>
<p>Cold start means new articles that have not been purchased before.</p>
<p>I installed cudf on my Windows machine but failed in the past, so I just used the CPU. I used joblib for parallel computing and multiple notebooks to process data.</p>
<p>Yes, like a pre-ranker.</p>
<p>Train a logistic regression model to narrow the top 1000 popular candidates down to 50~200 candidates.</p>
<p>You can see this article:<br>
<a href="https://songxia-sophia.medium.com/collaborative-filtering-recommendation-with-co-occurrence-algorithm-dea583e12e2a" target="_blank">https://songxia-sophia.medium.com/collaborative-filtering-recommendation-with-co-occurrence-algorithm-dea583e12e2a</a></p>
<p>We don't need to take the same N for each strategy; just make sure the total number of candidates is 100. At the very least, we have the top 100 popular items for all users.</p>
<p>Just the most popular items. I have not checked, but I think there is no big difference in anyone's score on new customers because they are hard to predict.</p>
<p>The simplest model uses last week's data as samples for the ranking model: recall the top N candidates from previous data by rules or a model; the overlap of the recalled data and last week's data gives the positive samples, and the rest of the recalled data gives the negative samples.<br>
I suggest you read the code of the kernels; some people have published their code.</p>
<p>It's feature engineering: the cosine similarity between the user's latest and 2nd-latest purchased items and the candidate item.</p>
<p>The cosine similarity between the user's latest and 2nd-latest purchased items and the candidate item.</p>
<p>Different N, adjusted manually to optimize the hit number.</p>
<p>You can refer to this GitHub repo; the nodes are users and items. <a href="https://github.com/shenweichen/GraphEmbedding" target="_blank">https://github.com/shenweichen/GraphEmbedding</a>.</p>
<p>We don't need to set the target variable separately for different strategies; just merge them into the top 100 deduplicated candidates for each user. If the user-item pair existed in the next week, the target is 1, else 0.</p>
<p>Thanks to H&M and the Kaggle team for giving us such a wonderful competition. Thanks to my teammate <a href="https://www.kaggle.com/h4211819" target="_blank">@h4211819</a>; finally we got 1st place. This competition has no leak, a stable CV-LB correlation, and infinite possibilities to improve; we really enjoyed exploring the accuracy boundaries of those infinite possibilities.</p>
<h1>Overview</h1>
<p>The most interesting part of this competition is that we need to generate the train and test sets ourselves. The candidate generation strategy is the key to going beyond the accuracy limit, while good feature engineering and modeling can get close to that limit. Our solution uses various retrieval strategies + feature engineering + GBDT; it seems simple but is powerful.</p>
<p>We mainly generate recent popular items because fashion changes fast and has seasonality. We tried to add cold-start items, but they will never be ranked into the top 12 due to the lack of interaction information.<br>
User-item interaction information is always the most important part of a recommendation problem, so the features we created are almost all interaction features. Image and text features didn't help, but they should be useful for the cold-start problem.</p>
<p>Almost 50% of users have no transactions in the recent 3 months, so we created many cumulative features for them, plus last-week, last-month and last-season features for active users.</p>
<p>We use 6 weeks of data as train and the last week as valid, and retrieve 100 candidates for each user; this setup has a stable CV-LB correlation. We focused on improving a single LightGBM model until the last week; its CV is 0.0430 and LB is 0.0362. In the last week I had to rent GCP's big-memory servers and vast.ai's GPU servers to run bigger models and get higher accuracy.</p>
<p><a href="https://postimg.cc/xXLGHd2r" target="_blank"><img src="https://i.postimg.cc/bwBm7GVv/overview.png" alt="overview.png"></a></p>
<h1>Retrieval(aka: candidate generation | recall)</h1>
<p>At the early stage, we focused on increasing the hit number of the retrieved 100 candidates, trying various strategies to cover more positive samples.</p>
<table>
<thead>
<tr>
<th>WeekNo</th>
<th>HitNum@100</th>
</tr>
</thead>
<tbody>
<tr>
<td>2020-09-16</td>
<td>39142</td>
</tr>
<tr>
<td>2020-09-09</td>
<td>38427</td>
</tr>
<tr>
<td>2020-09-02</td>
<td>41019</td>
</tr>
</tbody>
</table>
<h1>Ranking</h1>
<ul>
<li><p><strong>Feature Engineering</strong><br>
Basically, features are created based on the retrieval strategies: create user-item interactions for repurchase, a collaborative filtering score for itemCF, a similarity for embedding retrieval, an item count for popularity……</p>
<table>
<thead>
<tr>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Count</td>
<td>user-item, user-category of last week/month/season/same week of last year/all, time weighted count…</td>
</tr>
<tr>
<td>Time</td>
<td>first, last days of transactions…</td>
</tr>
<tr>
<td>Mean/Max/Min</td>
<td>aggregation of age,price,sales_channel_id…</td>
</tr>
<tr>
<td>Difference/Ratio</td>
<td>difference between the user's age and the mean age of those who purchased the item, ratio of one user's purchase count of an item to the item's total count</td>
</tr>
<tr>
<td>Similarity</td>
<td>collaborative filtering score of item2item, cosine similarity of item2item(word2vec), cosine similarity of user2item(ProNE)</td>
</tr>
</tbody>
</table></li>
<li><p><strong>Downsampling</strong><br>
If we retrieve 100~500 candidates for each user, the number of negative samples is very big, so negative downsampling is a must; we found that 1 million ~ 2 million negative samples for each week gave better performance.<br>
<code>neg_samples = 1000000; seed = 42
train = train[train['label'] > 0].append(train[train['label'] == 0].sample(neg_samples, random_state=seed))
</code></p></li>
<li><p><strong>Model</strong><br>
Our best single model is a LightGBM (CV: 0.0441, LB: 0.0367). Finally we trained 5 LightGBM classifiers and 7 CatBoost classifiers for the ensemble (LB: 0.0371). CatBoost's LB score is much worse than LightGBM's; LightGBM has a very stable CV-LB correlation.<br>
<a href="https://postimg.cc/ZBmJRVLt" target="_blank"><img src="https://i.postimg.cc/V6rvKhr6/cv.png" alt="cv.png"></a></p></li>
</ul>
<h1>Optimization</h1>
<ol>
<li>We use Treelite to accelerate LightGBM inference (2X faster); CatBoost-GPU inference is 30X faster than LightGBM-CPU inference.</li>
<li>Transform all the categorical features (including two-way combinations) to label encoding and use reduce_mem_usage.</li>
<li>Create a <strong>feature store</strong>: save intermediate feature files to a directory and final features to feather; existing features will not be created again.</li>
<li>Split all the users into 28 groups and run inference simultaneously on multiple servers.</li>
</ol>
<h1>Machine</h1>
<p>I have a desktop with 128G RAM, a 64-vcore CPU and a TITAN RTX GPU. It's enough to run most of our models to win. My teammate has a desktop with 64G RAM. In the last week, we used 300G RAM GCP instances.</p>
<p>You can use DeepWalk, node2vec, LINE, ProNE… to train on the user-item bipartite graph and output embeddings.</p>
<p>I have a desktop with 128G RAM, a 64-vcore CPU and a TITAN RTX GPU. It's enough to run most of our models to win. But bigger models need 300G RAM, so I had to use GCP.</p>
<p>I tried XGBoost-GPU once; it was worse than LightGBM and slower than CatBoost, so I didn't try more.</p>
<p>I tried FM for recall and MLP/Transformer for ranking; they didn't work. Luckily I gave up soon to focus on GBDT.</p>
<p>it's just a concept.<br>
What is a feature store?<br>
The short version - a data management layer for machine learning that allows to share & discover features and create more effective machine learning pipelines.</p>
<p>I had never heard of the FIL library; thanks for sharing.</p>
<p>about 0.002 up</p>
<p>Totally agree with you, technology should be appropriate for business not just seems modern or fashionable</p>
<p>The public and private leaderboard data should be split by user groups; the data size is big enough to have approximately the same distribution.</p>
<p>No magic features, just adding more and more useful features; finally we have 340+ features.</p>
<p>Yes, although I spent much time on creating the pipeline and feature store, it helped us do more experiments without bugs.</p>
<p>28 groups only for test data, calculate HitNum@100 of all valid data </p>
<ol>
<li>manually adjust the different top N of different strategies to 100~500 to maximize the hit number.</li>
<li>we use the scores used in retrieval strategies as features for ranking model.</li>
</ol>
<p>30 minutes ~ 1 day depending on the data size; about 250 hours in total to resolve the whole challenge.</p>
<p>one model for every user</p> |
H&M Personalized Fashion Recommendations | 23th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all the winners, and thanks so much for hosting such an interesting competition!<br>
And thanks to my teammates <a href="https://www.kaggle.com/zakopur0" target="_blank">@zakopur0</a> <a href="https://www.kaggle.com/irrohas" target="_blank">@irrohas</a> <a href="https://www.kaggle.com/negoto" target="_blank">@negoto</a> <a href="https://www.kaggle.com/dehokanta" target="_blank">@dehokanta</a>.<br>
This competition was very hard for us. I'll share our team ZKMRD's solution.</p>
<h3>summary</h3>
<p><img src="https://user-images.githubusercontent.com/43205304/167539304-0851ae6a-15c6-4944-999a-11a27453ee6e.png"></p>
<h3>Candidates</h3>
<p>We generated item candidates for each customer and each week using multiple strategies.</p>
<ul>
<li>Most popular </li>
<li>Past purchased</li>
<li>User based CF(Collaborative Filtering)</li>
<li>Item based CF</li>
<li>Different color</li>
</ul>
<h3>Features</h3>
<p>This was a big part of the reason why we were able to increase our score in the last 2 weeks.</p>
<p><strong>Customer static attributes</strong></p>
<ul>
<li>basic attributes in customers.csv</li>
</ul>
<p><strong>Customer dynamic attributes</strong></p>
<ul>
<li>purchase count</li>
<li>last purchase flag</li>
<li>Days/Week since last purchase</li>
<li>price/discount of purchased items</li>
<li>mean sales channel</li>
<li>purchase rate by item segment</li>
<li>repurchase rate</li>
</ul>
<p><strong>Article static attributes</strong></p>
<ul>
<li>basic attributes in articles.csv</li>
<li>Bert sentence vector</li>
</ul>
<p><strong>Article dynamic attributes</strong></p>
<ul>
<li>trend value</li>
<li>weekly popular ranking</li>
<li>purchase count (1day, 2day ago, last week..)</li>
<li>purchase count by segment (age, item group)</li>
<li>popular rank by segment (age, item group)</li>
<li>mean sales channel</li>
</ul>
<p><strong>CF features</strong></p>
<ul>
<li>score by item based CF</li>
<li>score/rank by LightGCN (GCN based CF model)</li>
</ul>
<h3>Dataset</h3>
<p>It was difficult to create a good dataset, but we were able to do so with the help of <a href="https://www.kaggle.com/paweljankiewicz" target="_blank">@paweljankiewicz</a> and <a href="https://www.kaggle.com/lihaorocky" target="_blank">@lihaorocky</a> 's comments. Many thanks.</p>
<p>Key points about dataset</p>
<ul>
<li>Use only candidate examples generated, not all positive examples</li>
<li>Use customers with at least one positive example in the candidates</li>
<li>Remove items that are not currently for sale from the candidates</li>
</ul>
<h3>Model</h3>
<p>Our team used an LGBMRanker model.<br>
The setup is very simple, nothing special.</p>
<h3>CV</h3>
<p>The CV strategy is illustrated in the figure above.</p>
<ul>
<li>create feature and candidate: Past week from train</li>
<li>train: 100w-103w</li>
<li>valid: 104w</li>
</ul>
<p>When creating submission, we use 100-104w as train.</p>
<h3>Post Processing</h3>
<p>We apply two kind of post processing to all model.</p>
<ol>
<li>Use age-based most popular items as predictions for cold-start customers (see the sketch after this list).</li>
<li>Remove items with a low offline-sales ratio from the predictions of customers who prefer offline shopping.</li>
</ol>
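<p>A hedged sketch of the first post-processing rule, assuming a precomputed <code>age_bucket</code> column on the customers frame:</p>
<pre><code>import pandas as pd

def age_popular_top12(last_week_tx: pd.DataFrame, customers: pd.DataFrame) -> pd.Series:
    tx = last_week_tx.merge(customers[["customer_id", "age_bucket"]], on="customer_id")
    top12 = (
        tx.groupby(["age_bucket", "article_id"]).size()
          .rename("cnt").reset_index()
          .sort_values(["age_bucket", "cnt"], ascending=[True, False])
          .groupby("age_bucket")["article_id"]
          .apply(lambda s: " ".join(s.astype(str).head(12)))
    )
    # Cold-start customers get the top-12 string of their age bucket.
    return top12
</code></pre>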
<h3>Ensemble</h3>
<p>We blended 6 model predictions, which were created with different candidates and features.<br>
Ensembling was effective for our models; the LB score increased by approx. 0.001.</p>
<p>Thanks! I use Google Slides and get icon from here.<br>
<a href="https://icooon-mono.com/?lang=en" target="_blank">https://icooon-mono.com/?lang=en</a></p> |
H&M Personalized Fashion Recommendations | 25th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Kaggle and H&M for hosting the really exciting competition. This is my first time where I devoted significant portion of my spare time on Kaggle platform, and I learned a lot from this competition. I appreciate all the great notebooks and dicussions, which have helped me a lot construct my strategy.</p>
<p>My strategy was relatively simple and nothing special; I just followed <a href="https://www.kaggle.com/paweljankiewicz" target="_blank">@paweljankiewicz</a>'s <a target="_blank">great suggestions</a>.</p>
<p>Candidate generation</p>
<ul>
<li>repurchase (still wonder why this is so important in this competition though…)</li>
<li>Several collaborative filtering<ul>
<li>ItemkNN</li>
<li>Randomwalk with restart</li>
<li>Matrix factorization with negative sampling</li></ul></li>
<li>For ItemkNN, I used all the transactions with a weight that decays as the timestamp goes further into the past. This weighting reduces cold users while maintaining the recall. ItemkNN gave me the largest gain.</li>
<li>I used the Julia language to implement them</li>
<li>Candidates from (global/segmented) popular items also help</li>
</ul>
<p>Reranking</p>
<ul>
<li>The model is LightGBM LambdaMART and the features are based on popularity, user-item interaction and trending. This part is probably similar to many competitors.</li>
<li>Rank features (rank of the item in generated candidates) are important.</li>
<li>As I increased the number of candidates, it became more and more important to adjust the positive sample ratio and to sample negatives to deal with the label imbalance (in fact there are several discussions suggesting this)</li>
</ul>
<p>My environment</p>
<ul>
<li>At first, I used cudf on a notebook instance with a GPU attached</li>
<li>As I generated more and more candidates and features, even cudf had difficulty handling such a large table, so I switched to Google BigQuery. I moved all my preprocessing code to SQL. BigQuery provides a much more scalable way to handle tabular data.</li>
<li>Since I started to focus on this competition at the beginning of April, I had only a month to tackle it. After switching to BigQuery, I also moved to GCP Vertex training and ran lots of experiments in parallel on on-demand high-memory machines (finally with 512GB RAM). It was really useful for performing many trials and errors.</li>
</ul>
<p>My best model is</p>
<ul>
<li>CV: 0.03861</li>
<li>Public: 0.03225</li>
<li>Private: 0.03252</li>
</ul>
<p>Thanks.</p> |
H&M Personalized Fashion Recommendations | 26th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to Kaggle team, H&M staff and all competitors.<br>
It was such an interesting and well-designed competition (with a little shake) that I was excited to join.<br>
We are grateful that our team (<a href="https://www.kaggle.com/ruiknemui" target="_blank">@ruik</a> and me) got its highest rank on the private leaderboard, but we also feel "the wall" to a Gold medal.<br>
We'd like to keep up our efforts until we become Masters.<br>
Anyway, let me share our solution. I'd be very happy if you gave comments or advice on our approach.</p>
<h2>Pipeline</h2>
<p>Not so different from others, very simple.</p>
<ol>
<li>Generate base candidates for each customer from their historical transactions in the last N(=96) weeks. Articles that don't appear in a customer's historical transactions are not included at this stage, so this approach is based on <strong>"What will they repurchase?"</strong></li>
<li>Add candidates</li>
<li>Create features</li>
<li>Fit/predict by LGBMRanker</li>
<li>Postprocess</li>
</ol>
<h2>How to add candidates</h2>
<ul>
<li>Calculate article embeddings using word2vec as item2vec, then select N(=48) candidates whose embeddings have high similarity with the predicted "next purchase" vector.</li>
<li>Add candidates whose <strong>product codes</strong> are the same as those of the "pair" articles of each customer's last N(=3) transactions. (thanks to <a href="https://www.kaggle.com/code/titericz/article-id-pairs-in-3s-using-cudf" target="_blank">Mr.Giba's notebook</a>)<br>
Please note that these methods are applied to customers <strong>who have enough historical transactions</strong>.</li>
</ul>
<h2>Create features</h2>
<p>Not so different from winners.</p>
<ul>
<li><strong>article features</strong><ul>
<li># of sales in the last N weeks (not only for the article itself but also for the attributes the article belongs to, like "department_name") + the same segmented by age</li>
<li>max/min/median price purchased (smoothing with global median)</li>
<li>online purchased rate</li>
<li># of elapsed days from last purchase among all customers</li>
<li># of elapsed days from article's first sell</li></ul></li>
<li><strong>customer features</strong><ul>
<li>age: binned and adjusted (10 if age < 20, 60 if age >= 60)</li>
<li>"club_member_status","fashion_news_frequency"</li>
<li>attribute: estimated from index_code of articles they purchased (ex. if a customer usually purchases "Ladieswear", she is estimated as "Lady")</li>
<li>max/min/median price purchased (smoothing with global median)</li>
<li># of elapsed days from customer's last purchase</li></ul></li>
<li><strong>customer x article features</strong><ul>
<li># of elapsed days from last purchase for each article/customer combination</li>
<li># of each article/customer pair appeared in historical transactions</li>
<li>purchased rate(transactions) of article for each age/attribute</li>
<li>ratio of article's price(max/min/median) to customer's price(max/min/median) (ex. if ratio of article's price min to customer's price max is over 1, we can estimate <strong>"he has no budget to purchase the item."</strong>)</li></ul></li>
</ul>
<h2>Fit/predict by LGBMRanker</h2>
<p>We divided the customers into 3 folds, 30/35/35% (originally aiming to run experiments quickly).<br>
Next, we trained and predicted an LGBMRanker for each fold of customers, then applied weighted averaging (0.3/0.35/0.35) to the predictions.<br>
There is only one train/valid split: the valid set is the last week and the train set is the last 3 weeks before it.<br>
After parameter tuning with Optuna, we retrained the model on the last 3 weeks of data when predicting the test set.</p>
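<p>A minimal LGBMRanker sketch for one customer fold; <code>group</code> holds the number of candidate rows per customer, in the same order as the training matrix (parameter values are illustrative):</p>
<pre><code>from lightgbm import LGBMRanker

def fit_fold(X_train, y_train, group_train, X_valid, y_valid, group_valid):
    ranker = LGBMRanker(objective="lambdarank", n_estimators=500,
                        learning_rate=0.05, num_leaves=63)
    ranker.fit(X_train, y_train, group=group_train,
               eval_set=[(X_valid, y_valid)], eval_group=[group_valid],
               eval_at=[12])
    return ranker
</code></pre>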
<h2>Postprocess</h2>
<p>We apply postprocessing to the predictions of <strong>customers who have no candidates, or fewer than 12 candidates (transactions), in the last N(=96) weeks.</strong></p>
<ul>
<li>We predict top 12 sales in last 1 week among customers with no candidates in last N(=96) weeks to those customers (+0.0002)</li>
<li>We predict top 12-K sales in last 1 week for each age group to customers with K(<12) candidates (+0.0001)</li>
</ul>
<p>These methods improved the score by about 0.0003 (a slight but important improvement).</p>
<h2>What did not work</h2>
<ul>
<li>Candidates<ul>
<li>Using pictures (adding candidates which have most similar pictures and have the same "product_type_no", "index_group_no")</li>
<li>Using detail description (as same as using pictures)</li></ul></li>
</ul>
<p>I think the reason these methods did not work is a mistake in my implementation. When adding these candidates, it looked like a leak. I have realized I have to make much more effort to utilize such ideas.</p>
<ul>
<li><p>Features</p>
<ul>
<li>FN, active, postal_code (even count frequency)</li>
<li>price feat, online or offline purchased</li>
<li>on-sale label: it is probably because most articles are suddenly on sale for a few days, not continuously</li>
<li><p>Model</p>
<ul>
<li>LGBMClassifier (predict purchase or not): In fact, it worked well when the LB was under 0.03. The score jumped from 0.0274 to 0.0287. Mysterious…</li>
<li><p>Postprocess</p>
<ul>
<li>Replace "less confident" predictions with top sellers: I did not have enough time to try this, so there may be a better way…</li>
</ul> |
H&M Personalized Fashion Recommendations | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>In addition to the recent top-selling products, I will also select the top-selling products under the section ids that the user has historically purchased.</p>
<p>1. For example, in the ranking stage, multiple windows of different sizes are used. In the recall stage, I only select a window of one size, and some complex features are missing, such as itemCF-related features. The recall phase uses the binary objective and the ranking phase uses the lambdarank objective.<br>
2. It is calculated numerically. For example, I know the last time the user bought this product, and I also know the average interval between purchases of this product. The sum of the two is the estimated time of the next purchase.</p>
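<p>One possible reading of that feature as a small pandas sketch: the last date the (customer, article) pair was purchased plus the article's mean interval between consecutive purchases (column names follow the competition data):</p>
<pre><code>import pandas as pd

def next_purchase_estimate(tx: pd.DataFrame) -> pd.DataFrame:
    tx = tx.sort_values("t_dat")
    # Mean gap (days) between consecutive purchases of the same article.
    gaps = tx.groupby("article_id")["t_dat"].diff().dt.days
    mean_gap = gaps.groupby(tx["article_id"]).mean().rename("mean_interval")
    last_buy = (tx.groupby(["customer_id", "article_id"])["t_dat"]
                  .max().rename("last_buy").reset_index())
    est = last_buy.merge(mean_gap.reset_index(), on="article_id", how="left")
    est["est_next_buy"] = est["last_buy"] + pd.to_timedelta(est["mean_interval"], unit="D")
    return est
</code></pre>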
<p>Congrats to the 1st place winners and all the participants. Thanks to the competition organizers for this interesting competition. Here is my team mate <a href="https://www.kaggle.com/paweljankiewicz" target="_blank">@paweljankiewicz</a> and my solution.</p>
<h2>wht1996 solution</h2>
<p>I didn't do a good job of recalling product candidates. At the beginning, I tried recall methods such as itemCF, but later found that the effect was not even as good as directly selecting popular products. Therefore, my solution is to directly select about the 600 most popular products for each user (with some very simple strategies that refer to the user's history). I then train an LGB model and select 130 candidates for each user according to the model score, and then add all the products the user has purchased in their history as the final candidates.</p>
<p>After that, I train another LGB model to get the final result. Compared to the LGB model in the recall stage, the features are more complex and can be mainly divided into the following groups:</p>
<ul>
<li>User basic features: including num, price, sales_channel_id.</li>
<li>Product basic features: statistics based on each attribute of the product, including times, price, age, sales_channel_id, FN, Active, club_member_status, fashion_news_frequency, last purchase time, average purchase interval.</li>
<li>User product combination features: statistics based on each attribute of the product, including num, time, sales_channel_id, last purchase time, and average purchase interval.</li>
<li>Age product combination features: products popularity under each age group.</li>
<li>User product repurchase features: whether user will repurchase product and whether product will be repurchased.</li>
<li>Higher-order combinatorial features: For example, predict when the user will next purchase the product.</li>
<li>itemCF feature: calculate the similarity of each item through itemCF, and then calculate the score that whether user will buy the product.</li>
</ul>
<p>The score of the single model on the leaderboard is 0.0355; after adding <a href="https://www.kaggle.com/paweljankiewicz" target="_blank">@paweljankiewicz</a>'s candidates it can be pushed to 0.0362, and the ensemble score is 0.0368.</p>
<h2><a href="https://www.kaggle.com/paweljankiewicz" target="_blank">@paweljankiewicz</a> solution</h2>
<p>I mostly followed my own advice from the thread. I'm happy that my intuition was correct. Overall, until the merge with wht1996 I focused on generating a diverse set of recall strategies. The most interesting part of the solution wouldn't be the logic, but that I wrote the feature engineering part in Rust and created a mini framework for parallel feature extraction. I thought that I could get away with much more complex rules that way, and it was still pretty fast.</p>
<h3>Candidate strategies</h3>
<p>I used all sorts of counters to measure the popularity of items in different groups:</p>
<ul>
<li>customer attributes: different combinations (what boosted my score significantly was including postal_code)</li>
<li>article attributes: different combinations and then intersect with the customer history - so if a customer bought an item with certain properties I looked for popular items with the same property<br>
The counters were used for different recency like: 1,3,7,30,90 days.</li>
</ul>
<p>Also I used:</p>
<ul>
<li>image similarities using MobileNet embedding</li>
<li>cooccurences (inside basket and outside = basket to basket)</li>
<li>random graph walk over item-user graph</li>
</ul>
<p>Features:</p>
<ul>
<li>customer / article attributes</li>
<li>all sorts of similarity based measures for example I calculated average similarity of items vs the items that the customer bought</li>
<li>streaks - this was inspired by some previous competitions' posts - for many article attributes I calculated streaks measuring the number of times the customer bought a product, category, section etc. in a row.</li>
</ul>
<p>In the end I had 1000 candidates per customer, which was way too many. I had trouble improving beyond 0.0340, but with wht1996's help I managed to improve the single model to 0.0348.</p>
<h3>Modeling</h3>
<p>I managed to write a lambdarankmap objective for LGBMRanker (mostly by copying the code from XGBoost) and it was slightly better than lambdarank (which uses NDCG objective). For the whole competition I trained the models using 2-3 months of transactions and last week to validate.</p>
<h3>What didn't work</h3>
<p>Deep learning models - I tried several recbole models but they weren't so good.</p>
<h2>How to improve competitions like this</h2>
<p>A large part of the competition was about predicting the availability of the products for different postal_codes. I think this was disguised as a recommendation system competition to some extent. For future competitions I would suggest:</p>
<ul>
<li>providing the state of articles - which are available and where</li>
<li>to minimize the calculation needed to submit, I would suggest providing a list of customers with transactions in the submission week. Predicting customers who didn't have transactions was a very computation-heavy task. The reason is that the models are the same, but creating features and predicting takes time.</li>
</ul>
<h2>How we cooperated</h2>
<p>The cooperation was smooth. We exchanged our solutions and candidates (with scores).</p> |
H&M Personalized Fashion Recommendations | 30th Solution Intro and Personal Summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks for the efforts of HM and Kaggle platform, who give us this wonderful experience~</p>
<p>Let me introduce my solution and try to point out its shortcomings compared with the better-performing solutions.</p>
<ul>
<li><p>0.Solution Framework<br>
I used two stage strategy, recall and rank, just like most of other competitors.</p></li>
<li><p>1. Local valid dataset generation<br>
I only used the last week to create my local valid dataset<br>
The number of unique customer_id in last week only about 60k, but the size of total customer is 1300k in the full dataset, apparently, it will limited generalization of ranking step.</p></li>
<li><p>2. Recall stage<br>
I relied on only a single itemCF algorithm to do the recall work (a rough, hypothetical sketch of this kind of recall is shown after this list). I also used the popularity of items as an extra weight, which helped boost the mAP score.<br>
Other competitors usually leveraged a multi-way recall strategy. Besides itemCF and u2i, methods like graph models, SVD, and history items are also included in their recall stage. <strong>I guess that is the main reason why my final rank is only 30th.</strong><br>
Actually, I tried deepMatch, but it did not work well. I guess the default “last item train strategy” is not suitable for the H&M dataset. I believe the reason is that a lot of customers had a long gap between their last purchase and their prior purchase, which made it difficult to train the item embeddings effectively. Modifying the “last item train strategy” could make it more useful, but I did not have extra time to do so.<br>
My final recall stage scored around 0.028 on the LB.</p></li>
<li><p>3. Rank stage<br>
I used 5-fold LightGBM with lambdarank in this part.<br>
I tried NN models like DIN and DeepFM, but they did not beat LGB.<br>
My feature engineering focused on the interactions between customers and items. It included item frequency, gaps between purchase times, and category feature statistics (type, color, department, etc.).<br>
It is worth noting that I also used vectors produced from the image files in the rank part to evaluate the similarity between history and candidate items, but this did not bring a large improvement.</p></li>
</ul>
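<p>As referenced in the recall bullet above, here is a rough, hypothetical sketch of an itemCF-style recall with an item-popularity weight. It is not the author's code; the toy data, the co-occurrence similarity and the <code>alpha</code> weight are illustrative assumptions:</p>
<pre><code>import pandas as pd
from collections import defaultdict

# toy transactions; in the competition this would be transactions_train.csv
tx = pd.DataFrame({
    "customer_id": ["u1", "u1", "u2", "u2", "u3"],
    "article_id":  ["a1", "a2", "a1", "a3", "a2"],
})

# itemCF similarity from co-occurrence: items bought by the same customer are related
sim = defaultdict(lambda: defaultdict(float))
for _, items in tx.groupby("customer_id")["article_id"]:
    items = items.tolist()
    for i in items:
        for j in items:
            if i != j:
                sim[i][j] += 1.0

# global popularity, used as an extra additive weight on the CF score
pop = tx["article_id"].value_counts()

def recall(history, topn=12, alpha=0.1):
    """Score candidates by summed item-item similarity plus a popularity term."""
    scores = defaultdict(float)
    for bought in history:
        for cand, s in sim[bought].items():
            if cand not in history:
                scores[cand] += s
    for cand in scores:
        scores[cand] += alpha * pop.get(cand, 0)
    return sorted(scores, key=scores.get, reverse=True)[:topn]

print(recall({"a1"}))
</code></pre>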
<p><strong>My rank stage finally got 0.031 on the LB, which only adds an extra 0.003 mAP on top of the recall part.
I believe the local dataset generation strategy limited the power of the ranking models.</strong><br>
LB leaders usually used 6 weeks or more of data as the trainset. Apparently, more data brings extra benefits and enhances the generalization of the rank part.</p>
<ul>
<li>4. Team<br>
I have to say, a good team is valuable. It was quite fatiguing to finish this competition by myself.<br>
I highly recommend that everyone try to play Kaggle with others (if you don't mind giving up a solo gold, haha). Not only may it bring a higher rank, but the encouragement from team members is also valuable at any time.</li>
</ul> |
H&M Personalized Fashion Recommendations | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>That depends. More data can reduce overfitting. In my case, 18weeks vs 2weeks is about 0.0005-0.0010 up</p>
<p>Thanks!<br>
1. Yes. But I didn't generate candidates using BPR; it is just a feature in the ranking model (I didn't have more spare time, and the recall rate of ~18% already felt high enough).</p>
<p>2&3. Busy with work now. May be updated this weekend</p>
<p>Yes! Got the Master degree 4 years ago in ECNU🤝</p>
<p>Thanks!</p>
<ol>
<li>120GB</li>
<li>Feature processing is only on cpu and model training is on gpu</li>
</ol>
<p>Thank you!</p>
<ol>
<li>No, there aren't duplicated (customer_id, article_id) pairs in my prediction. A customer may purchase the same article multiple times in the next week (such as three pairs of socks), but it's really hard to predict the amount of duplication.</li>
<li>The latter one.</li>
</ol>
<p>updated Q2&Q3 <a href="https://www.kaggle.com/hyungeunjo" target="_blank">@hyungeunjo</a> </p>
<ol>
<li>Updated <code>Ranking-Samples</code></li>
<li>Feature 2 is like <code>transaction_7days.groupby(['customer_id', 'article_id']).size()</code>, meaning the count of the user purchasing the target item in the last 7 days; <br>
Feature 3 is like <code>transaction.groupby(['customer_id', 'article_id']).min('days')</code>, meaning how many days it has been since the user last purchased the target article.</li>
</ol>
<p>At First, thank kaggle staff and h&m team for organizing this interesting competition. As a new kaggler (but an old RECSYSer), I enjoy this game very much and very very happy to get this place. And, thank all the competitors and the community, from whom I learned very very much, which is a bigger gain for me than the $8000🤓. At last, I must thank the covid-19 that isolated me at home for two months😂, which forced me to focus on the competition in the spare time because of no outting activities, especially during May day holiday.</p>
<p>As my pipeline is very similar to others' (candidate generation + ranking) and I notice most part of my solution (either the recall methods or the ranking features/models) has been used and described in other guy's threads, I will just briefly introduce something different and useful in my case here. (LB scores in the following are all single model Private scores.)</p>
<ul>
<li><strong>Features about recall strategy</strong>: whether this article is recalled by <code>strategy_name</code> and the rank of this article under that <code>strategy_name</code> (because I've ordered the candidates under each recall strategy by some metric, the rank number can reflect the relevance). These features, plus an expansion of the number of candidates per user from dozens to hundreds, boosted my LB score <strong>from 0.02855 to 0.03262</strong>, which changed the color of the medal from silver to near gold. In addition, if I only increased the recall number without adding the recall features, the CV score was very poor.</li>
<li><strong>BPR matrix factorization</strong>: The similarity between user and article is important for ranking. Features about item2item similarity, such as counts of items bought together and word2vec, are commonly used in most competitors' models and those also boosted my score a lot, but what improved my model most is the user2item similarity obtained from BPR matrix factorization. This BPR model is trained with all the transactions before the target week (I've trained one BPR for each week) using <a href="https://implicit.readthedocs.io/en/latest/bpr.html" target="_blank">implicit</a> (a minimal training sketch with this library is shown after this list). The AUC of the BPR similarity alone is ~0.720, while the AUC of the whole ranking model is ~0.806 and the best AUC of any other single feature is ~0.680. In the end, this single similarity feature boosted my LB score <strong>from 0.03363 to 0.03510</strong>, which took me from the gold medal area to the prize area.</li>
</ul>
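<p>As mentioned in the BPR bullet above, a minimal training sketch with the <code>implicit</code> library might look like the following. This is an illustrative assumption rather than the author's code, and it assumes a recent version of <code>implicit</code> (whose <code>fit</code> expects a user-by-item sparse matrix; older versions differed):</p>
<pre><code>import numpy as np
import scipy.sparse as sp
from implicit.bpr import BayesianPersonalizedRanking

# toy implicit-feedback matrix: rows = users, cols = items, 1 = purchased
rows = np.array([0, 0, 1, 2, 2])
cols = np.array([0, 1, 1, 0, 2])
user_items = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(3, 3))

# in the write-up, one BPR model is trained per target week on all earlier transactions
model = BayesianPersonalizedRanking(factors=64, iterations=100)
model.fit(user_items)

# the user2item similarity feature: dot product of the learned factors (CPU build assumed)
u2i_score = model.user_factors[0] @ model.item_factors[2]
print(u2i_score)
</code></pre>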
<p>Any questions are welcome. Thank you for reading!</p>
<p>-----------------------UPDATE 2022.05.15------------------------------</p>
<h1>Recalling</h1>
<ul>
<li>the most popular items</li>
<li>items that the user recently purchased</li>
<li>items related to what the user recently purchased</li>
<li>popular items under user's attributes (age, gender and so on)</li>
<li>items which have the same prod_name with what the user recently purchased</li>
</ul>
<h1>Ranking</h1>
<h3>Samples</h3>
<ul>
<li>The generated candidates which the customer didn't purchase in the target week are labeled with 0 and those which the customer purchased are labeled with 1</li>
<li>Because of the big imbalance (pos:neg is about 1:300), downsampling is adopted. I just keep negative samples in the amount of <code>30*len(pos_samples)</code>.</li>
<li>The positive samples are only the ones which are recalled in the candidates; other items that the user actually purchased are not included.</li>
</ul>
<h3>Features</h3>
<ul>
<li>user and item static attributes</li>
<li>counts of user-item and user-item attributes in the last 7 days, 30 days and all transactions. </li>
<li>day diff of user-item and user-item attributes last seen in the transactions;</li>
<li>similarities between user and item:<ul>
<li>BPR matrix factorization user2item similarity</li>
<li>word2vec item2item similarity</li>
<li>jaccard similarity between the attributes of items that user purchased and the attributes of the target item</li>
<li>txt similarity and image similarity by the embeddings obtained from public pre-trained models</li></ul></li>
<li>item popularity:<ul>
<li>counts of item/item attributes purchased in the last 7 days, 30 days and all transactions;</li>
<li>time-weighted popularity, where the weight is 1 / (days between now and the purchase date)</li>
<li>weekly trend score from <a href="https://www.kaggle.com/code/byfone/h-m-trending-products-weekly" target="_blank">this kernel</a></li></ul></li>
<li>features about recall strategy:<ul>
<li>whether this article is recalled by <code>strategy_name</code> </li>
<li>the rank of this article under the <code>strategy_name</code></li></ul></li>
</ul>
<h3>Models</h3>
<p>The metric of each model is very similar. The final version is the average score of 16 models which were trained with different hyperparameters (learning rate, max_depth and even random seed).</p>
<ul>
<li>lightgbm</li>
<li>catboost</li>
<li>xgboost</li>
</ul>
<p>Thanks!</p>
<ol>
<li>18 weeks</li>
<li>Xgboost, Catboost and Lightgbm give similar CV scores, so the final submission is a mix of these models.</li>
</ol>
<p>Yes. Totally right!</p>
<ol>
<li>Yes</li>
<li>Both the ranking features you described was used in my final model. But If only talking about the boosting from 0.02855 to 0.0326, it's the latter one(ranking num) contributing most. In that submission, features about relevance measuring was not complete, and CV score was poor before adding the features about ranking num.</li>
</ol>
<p>Congrats on your 2nd place. Thank you for this comment and lots of valuable and impressive discussion during the competition!</p>
<p>Thank you. My pipeline consists of two part: candidate generation and ranking. At the beginning, I generated dozens of candidate articles for each user. Then, for a higher recall rate, I tried to increase the recall num of each candidate strategy, such as popular items 30->100, itemcf 20->50 and so on. As a result, the total count of candidates for each user increased to hundreds and the recall rates increased from ~10% to ~18%.</p>
<p>Both used the score described in <a href="https://www.kaggle.com/code/hervind/h-m-faster-trending-products-weekly" target="_blank">this kernel</a>. I only adjusted some parameters very slightly. <a href="https://www.kaggle.com/hyungeunjo" target="_blank">@hyungeunjo</a> </p>
<p>It's about the similarity between the candidate article and the user-purchased articles.</p>
<p>For example, if a user has purchased [a1, a2, a3] in the history, "word2vec item2item similarity" is the mean/sum/max of [cosine_sim(a1, candidate), cosine_sim(a2, candidate), cosine_sim(a3, candidate)].</p> |
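<p>A tiny numpy illustration of that aggregation (the embeddings here are made up; in practice they come from the word2vec model):</p>
<pre><code>import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# hypothetical article embeddings
emb = {
    "a1": np.array([0.9, 0.1]),
    "a2": np.array([0.8, 0.3]),
    "a3": np.array([0.2, 0.9]),
    "candidate": np.array([0.7, 0.2]),
}

history = ["a1", "a2", "a3"]
sims = [cosine(emb[a], emb["candidate"]) for a in history]

# the mean / sum / max of these similarities become ranking features
features = {"i2i_mean": np.mean(sims), "i2i_sum": np.sum(sims), "i2i_max": np.max(sims)}
print(features)
</code></pre>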
H&M Personalized Fashion Recommendations | 46th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>46th place solution (Item2Vec/Transformer)</h1>
<h1>Introduction</h1>
<p>Thank you to H & M, kaggle, and my rivals for the exciting competition.<br>
This post is a brief summary of my solution.<br>
Since it is English written by google translate, I think that there are some parts that are not connected, but if you have any questions, please feel free to ask.</p>
<h1>Overview</h1>
<p>This time, I designed the model using the 2-tower model using NN.<br>
The 2-tower model provides high performance recommendations by using two parts: the retrieval part, which narrows down candidates with a lightweight model, and the ranking part, which makes predictions with a highly accurate model.<br>
Specifically, the flow of prediction / learning is as follows.</p>
<ul>
<li><p>1st stage (retrieval part) Public: 0.02137 Private: 0.02296<br>
Item2Vec is used to narrow down candidates from the top of a ranking of articles by how often they appeared around the prediction date. (1)<br>
Items from the user's past history that other users have recently interacted with are extracted on a rule basis. (2)</p></li>
<li><p>2nd stage (ranking part) Public: 0.02779 Private: 0.02846<br>
Candidates (1) and (2) are reranked using a Transformer.</p></li>
</ul>
<h1>Solution</h1>
<p>The model will be explained in the following three parts.</p>
<ul>
<li>Embedding part common to retrieval / ranking</li>
<li>Retrieval part</li>
<li>ranking part</li>
</ul>
<h2>Embedding part</h2>
<p>In the Embedding part, the pre-calculated image / natural language model embedding and the category embedding initialized with word2vec of gensim are mixed with <a href="https://arxiv.org/abs/2008.13535" target="_blank">DCNV2</a>.<br>
Each embedding was obtained as follows.</p>
<ul>
<li><p>image</p>
<ul>
<li>swin_large_patch4_window12_384_in22k + TruncatedSVD (1024 dim)</li>
<li>EfficientNetV2L + TruncatedSVD (1024 dim)</li></ul></li>
<li><p>Natural language</p>
<ul>
<li>ELECTRA LARGE (Enter by concatenating detail_desc and category name)</li>
<li>Sentence-T5 st5-11b (Enter by concatenating detail_desc and category name)</li>
<li>BERT Tokenizer + Tf-Idf + PCA (128 dim)</li></ul></li>
<li><p>Category</p></li>
<li><p>Word2vec by gensim</p></li>
<li><p>Table data</p>
<ul>
<li>price</li>
<li>Total quantity (same products are combined into one record for each day)</li>
<li>How many interactions for that user</li>
<li>Price statistics when compared to other days</li></ul></li>
</ul>
<p>Since these features were introduced before the implementation of other methods with greater effects, we have not verified the individual effects, but I feel that they were not very effective.</p>
<h2>Retrieval part</h2>
<p>In the Retrieval part, I narrowed down candidates by the cosine similarity between a query vector, computed as a weighted average of article embeddings from the user's history, and each product's embedding.</p>
<h3>Model</h3>
<h4>Target embedding</h4>
<p>I put the embedding of the article into several fully connected layers and converted it.</p>
<h4>Query embedding</h4>
<p>I used a weighted average of the article embeddings, weighted by time and other factors.<br>
The weights were computed as <code>(1/2) ^ relu(edge_feature)</code>, where edge_feature is a feature such as the elapsed time, and the query embedding is the weighted average of the history embeddings with those weights.</p>
<p>I also considered a Transformer using self-attention, but it was not useful in this part.</p>
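<p>A small numpy sketch of this weighting scheme (illustrative only; the real model computes this inside the network, and the edge features here are made up):</p>
<pre><code>import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# hypothetical history of 3 purchased articles, each with a 4-dim embedding
article_emb = np.random.rand(3, 4)
# one edge feature per history item, e.g. (scaled) elapsed time since the purchase
edge_feature = np.array([0.0, 2.0, 5.0])

# the weight halves for every unit of (positive) edge feature
w = 0.5 ** relu(edge_feature)
w = w / w.sum()

# query embedding = weighted average of the history embeddings
query_emb = (w[:, None] * article_emb).sum(axis=0)
print(query_emb)
</code></pre>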
<h3>Loss</h3>
<p>I calculated <a href="https://arxiv.org/abs/2004.11362" target="_blank">SupervisedContrastiveLoss</a>, labeling pairs with an interaction as 1 and negative examples without one as 0.<br>
At the same time, I used <a href="https://arxiv.org/abs/1905.00292" target="_blank">AdaCos</a> with a margin parameter of 0.</p>
<p>When the margin was greater than 0, all the cosine similarities diverged to negative values.<br>
I also monitored the cosine similarity of the positive examples, and in the end it settled at about 0.3.<br>
In terms of ranking, the correct example was around 600th out of 4096 samples, which made me feel how difficult this task was.</p>
<h3>Subsampling</h3>
<p>Batches segmented users into 10-year age groups, ensured that targets of the same age group and the same day were in the same batch, and were processed in chronological order.<br>
During training, I sampled up to 4096 articles from the 10,000 articles that appeared most often in the last week within the 10-year age group.<br>
Sampling used the <a href="https://stackoverflow.com/questions/43310075/sampling-without-replacement-from-a-given-non-uniform-distribution-in-tensorflow" target="_blank">Gumbel-max trick</a>.</p>
<p>At inference time, I calculated the cosine similarity against the 10,000 items that appeared most frequently in the last 90 days and had been purchased by somebody in the last 7 days. (I also tried approximate nearest neighbor search like ScaNN, but the time saving was negligible.)</p>
<h2>Ranking part</h2>
<p>In the ranking, rank learning was performed using Transformer.</p>
<h3>Model</h3>
<p>I used a single Transformer layer with the recommendation-target article as the Query and the user's browsing history as the Key & Value. Again, there was no need for a self-attention Encoder or a Decoder with more than one layer.<br>
The embedding obtained from this and the intermediate-layer outputs computed so far were fed into DCNV2 for binary classification.</p>
<h3>Loss</h3>
<p>The loss is based on <a href="https://www.tensorflow.org/ranking/api_docs/python/tfr/keras/losses/ApproxNDCGLoss" target="_blank">ApproxNDCGLoss</a>, modified into a custom loss whose penalty depends on the rank difference.<br>
I compared ApproxNDCGLoss (original), rank difference ** 1, and rank difference ** 2, and rank difference ** 1 gave the best result.</p>
<h3>Subsampling</h3>
<p>At training time, a maximum of 128 negative examples were created with the Gumbel-max trick, using the cube of the cosine similarity as the sampling probability. This gave good feedback to the model, since it sampled somewhat weaker negatives than simply taking the top candidates as-is.<br>
At inference time, I predicted on the top 128 candidates from the retrieval part plus the past browsing history.</p>
<h1>Machine & Experiments</h1>
<p>I used a Core i9-12900 CPU, 64GB of RAM, and an RTX 3090 GPU.<br>
Of that, about half of the CPU and RAM was used; training on all the data took 9 hours per epoch, and the highest score came at epoch 2.</p>
<p>As an aside, my notebook experiment number had reached 159 by the end, so I feel I spent a lot of effort on this.</p>
<p>I will update it from time to time. Feel free to ask questions!</p>
<h1>UPDATE</h1>
<p>I published a more detailed solution on the technical blog of my company. The explanation for those who did not participate in the competition and the part that I could not write in the discussion due to my lack of English ability are written. If you can read Japanese, please read it.<br>
<a href="https://future-architect.github.io/articles/20220602b/" target="_blank">H&M Personalized Fashion Recommendations 参加記 (46th/2952)</a></p>
<p><a href="https://future-architect.github.io/images/20220602b/H&M_46th_solution_overview.drawio.png" target="_blank">image of solution overview</a></p>
<p>Thanks for your comment.<br>
It's difficult to publish everything because the code isn't managed properly and only I can understand it.<br>
However, if there is a part that you are interested in, I will cut out that part and publish it (if possible).</p>
H&M Personalized Fashion Recommendations | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to H&M and Kaggle Team for organizing such a great recommendation competition.<br>
Thanks <a href="https://www.kaggle.com/paweljankiewicz" target="_blank">@paweljankiewicz</a>, <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a>, <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> and many Kagglers for their great discussions and notebooks.<br>
I hope to get a solo gold metal this time, so I decided to do this competition by myself in the beginning.</p>
<h1>Overview</h1>
<p>I followed the same pipeline as <a href="https://www.kaggle.com/paweljankiewicz" target="_blank">@paweljankiewicz</a> suggested in <a href="kaggle.com/competitions/h-and-m-personalized-fashion-recommendations/discussion/307288" target="_blank">Addressing common questions and what the competition is really about</a>. First, generate candidates using different recall models for each customer. Second, build a ranking model that ranks the candidates within each customer.<br>
<a href="https://postimg.cc/PNQsRDhB" target="_blank"><img src="https://i.postimg.cc/CMWhdC2L/arch.png" alt="arch.png"></a></p>
<h1>CV setup</h1>
<p>I mainly followed <a href="https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations/discussion/308919" target="_blank">How To Setup Local CV</a>, I created 5 folds and used 3 folds as training data. I also tried more training data, but it didn’t increase the local CV and LB.</p>
<h1>Recall model</h1>
<p>I developed 4 recall models in total.</p>
<ul>
<li>Item2item CF: the relationship between the pair of items (the user who bought Y, also bought Z).</li>
<li>repurchase: latest 20 purchased products by each customers.</li>
<li>popular: popular ranking in last week.</li>
<li>Two Tower MMoE: I mainly focused on improving the performance of this model, because it can generate candidate lists of arbitrary length for all users, and the user/item embeddings (similarity) can also be used as an important feature for the ranking model.</li>
</ul>
<p>I used a gating network to make sure that the user tower can learn well by using different experts for recent active customers and non active customers.<br>
<a href="https://postimg.cc/phyh6v5S" target="_blank"><img src="https://i.postimg.cc/j54HCdxR/user-tower.png" alt="user-tower.png"></a><br>
<a href="https://postimg.cc/crGCQwh2" target="_blank"><img src="https://i.postimg.cc/RVNtxTHS/item-tower.png" alt="item-tower.png"></a><br>
<a href="https://postimg.cc/kRQnjYsZ" target="_blank"><img src="https://i.postimg.cc/wMP7jSm6/loss.png" alt="loss.png"></a></p>
<h1>Ranking model</h1>
<p>I mainly used lightgbm with lambda-rank objective. I also implement DCN model described in <a href="https://arxiv.org/abs/2008.13535" target="_blank">https://arxiv.org/abs/2008.13535</a>. But I didn’t have enough time to tune it. The score of lightgbm is much better.</p>
<h1>Cold-start users</h1>
<p>Two Tower MMoE can also generate candidates using user demographic features for customers without purchase log.</p>
<h1>Cold-start items</h1>
<p>Besides product meta features, I also extracted the following features from text and image.</p>
<ul>
<li>Text: Extract TF-IDF features from the product description, cluster products using SVD + K-means, then use the cluster id as a feature (a small sketch of this step follows this list).</li>
<li>Image: Extract image vectors using pre-trained tf_efficientnet_b3_ns backbone, cluster products using PCA + K-means. Then use cluster id as feature.</li>
</ul>
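<p>As noted in the Text bullet above, a small sketch of that TF-IDF + SVD + K-means step could look like this (illustrative only; the toy descriptions, component count and cluster count are made up):</p>
<pre><code>import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

# toy product descriptions; in the competition this would be articles["detail_desc"]
desc = pd.Series([
    "Jersey top with narrow shoulder straps.",
    "Soft cotton top with short sleeves.",
    "Slim-fit denim trousers with stretch.",
    "Denim shorts with a high waist.",
])

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(desc.fillna(""))

# compress the sparse TF-IDF matrix, then cluster; the cluster id becomes a feature
X_svd = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
cluster_id = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_svd)
print(cluster_id)
</code></pre>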
<h1>Submission</h1>
<p>best local CV is 0.039, LB is 0.0349.<br>
My final submission combined my previous submissions by<br>
<a href="https://www.kaggle.com/code/titericz/h-m-ensembling-how-to" target="_blank">h-m-ensembling-how-to</a></p>
<h1>to be updated</h1>
<p>Thanks! But honestly it is a real challenge to train an NN model with so many cold users.</p>
<p><a href="https://www.kaggle.com/biubiug" target="_blank">@biubiug</a> </p>
<blockquote>
<p>how is LB or oof score of this mmoe model?</p>
</blockquote>
<p>I didn't submit it. Actually I mainly focused on improving the recall of this model instead of mAP. I only calculated mAP@12 local as a reference, it is around 0.025 for last week.</p> |
H&M Personalized Fashion Recommendations | 52nd place with 20 minute Kaggle Notebook | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for the feedback, <a href="https://www.kaggle.com/hanejiyuto" target="_blank">@hanejiyuto</a>!</p>
<p>I wrote practically all of it for this competition.<br>
Although I do hope to reuse parts of it in the future.</p>
<p>Pleasure!</p>
<p>Yes, that's correct.</p>
<p>The reasoning is, we want to train the model to do well on the week of the LB (week 105 in my code) - for that week, we'll only have the candidates we generate.</p>
<p>You'll want to see <a href="https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations/discussion/324076" target="_blank">this comment</a> from <a href="https://www.kaggle.com/paweljankiewicz" target="_blank">@paweljankiewicz</a>.</p>
<h3>Update:</h3>
<p>By using cuML's FIL, notebook run time went down from 20 minutes to 12 minutes!</p>
<h3>I was able to generate my final submission - including candidate retrieval, feature engineering, and training/predicting with two models - in just 20 minutes.</h3>
<p>Partially, it was from using cudf for whatever I could, and it's really amazing how much time it saves.</p>
<p>But I think the main reason was that I used a light-weight candidate retrieval method that gave me a good result without needing many candidates per customer, and without needing collaborative filtering or image/text processing.</p>
<p>I had 33 million candidates (which comes out to only ~25 per customer on average).</p>
<p>I started with <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s notebooks - <a href="https://www.kaggle.com/code/cdeotte/customers-who-bought-this-frequently-buy-this" target="_blank">customers who bought this frequently bought this</a> and <a href="https://www.kaggle.com/code/cdeotte/recommend-items-purchased-together-0-021" target="_blank">Recommend Items Purchased Together</a>.</p>
<p>I refactored his first notebook in <a href="https://www.kaggle.com/code/jacob34/cdeotte-pairs-in-3-minutes" target="_blank">this notebook</a>, so that I could get the same result in 3 minutes.</p>
<p>Then I made several changes:</p>
<ol>
<li>I had it only look for "pairs" from articles that were sold in the past week</li>
<li>I had it find 5 "pairs" for each article, instead of just 1</li>
<li>I ignored any article that wasn't sold within the past week</li>
<li>I ignored any "pair" purchased by less than 2 customers</li>
</ol>
<p>We get a very strong retrieval method, a sort of manual, time-aware collaborative filtering - what customers with similar purchase interests purchased <em>in the past week</em> - so it includes trend information as well.</p>
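<p>A hedged pandas sketch of this kind of "bought together, recently" pair mining (not the author's code; the toy data and thresholds are made up, and in practice the join would be restricted to the last week's transactions and sped up with cudf):</p>
<pre><code>import pandas as pd

# toy recent transactions (in practice: only the most recent week or weeks)
tx = pd.DataFrame({
    "customer_id": ["u1", "u1", "u2", "u2", "u2", "u3", "u3"],
    "article_id":  ["a1", "a2", "a1", "a2", "a3", "a1", "a3"],
})

# self-join on customer_id to get all (article, other article) pairs bought by the same customer
pairs = tx.merge(tx, on="customer_id")
pairs = pairs[pairs["article_id_x"] != pairs["article_id_y"]]

# count supporting customers per pair, drop weak pairs, keep the strongest partners per article
pair_counts = (
    pairs.groupby(["article_id_x", "article_id_y"])["customer_id"]
    .nunique()
    .reset_index(name="n_customers")
)
pair_counts = pair_counts[pair_counts["n_customers"].ge(2)]   # pairs supported by 2+ customers
top_pairs = (
    pair_counts.sort_values("n_customers", ascending=False)
    .groupby("article_id_x")
    .head(5)                                                  # 5 partners per article
)
print(top_pairs)
</code></pre>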
<p>For the second part, I followed his method of recommending "pairs" of items a customer had purchased in their last few weeks of history.</p>
<p>And I also kept features to feed to LGBMRanker about:</p>
<ul>
<li>the strength of the "match" (i.e. how many/what percentage of customers the "pair" was based on)</li>
<li>the "source" of the pair (i.e. how recently the original article was purchased, how many times it was purchased)</li>
</ul>
<p>Of course, I also generated the regular, standard candidates: past purchases, and 12 most popular items (I generated those based on age group).</p>
<p>I think this retrieval method was the main reason I was able to get a good result with relatively few candidates/resources.</p>
<h3>Update - code</h3>
<p>Some people asked for it, so I made a clean notebook (moved nearly all code to the repo).<br>
Here's <a href="https://www.kaggle.com/jacob34/clear-n-simple-final-shared-notebook" target="_blank">the notebook</a>, here's <a href="https://github.com/JacobCP/kaggle-handm-helpers" target="_blank">the repo</a>, and here's the <a href="https://www.kaggle.com/datasets/jacob34/handmhelpers" target="_blank">kaggle dataset</a> that syncs with the repo.</p>
<p>The notebook runs in 20 minutes, and gets the same private LB score (it varies a bit per run).</p>
<p>My workflow was:</p>
<ol>
<li>I'd work with a function in the notebook (for example, a candidate generator)</li>
<li>Once the function was working, I'd move it to the repo and point the notebook there.</li>
<li>Any parameters of the function (for example, candidate threshold), I'd add to my <code>params</code> dictionary in the notebook.</li>
<li>Each function had a <code>**kwargs</code> argument, so I could pass the same single <code>params</code> dictionary to every function, and the function would use the arguments it needed.</li>
<li>If I needed to revisit a function, or work with the values generated in middle, I'd move the function/s back to the notebook. </li>
</ol>
<h3>Update #2 - cuML's FIL</h3>
<p>Afterwards, I refactored my code to use cuML's FIL for inference, and that brought notebook time down from 20 minutes to just 12 minutes!</p>
<p>Thanks, <a href="https://www.kaggle.com/xfffrank" target="_blank">@xfffrank</a>!</p>
<p>I've updated my post with links to a clean notebook and my repo/dataset.</p>
<p>Thanks <a href="https://www.kaggle.com/marcogorelli" target="_blank">@marcogorelli</a>!</p>
<p>I've cleaned up my notebook for you, and have updated my post with links to the code/repo/dataset.<br>
Questions welcome!</p>
<p>Thanks!<br>
Congratulations to you on your solo silver!</p>
<p>Thanks <a href="https://www.kaggle.com/hervind" target="_blank">@hervind</a>!</p>
<p>I took inspiration from your notebook on the ability to speed things up with proper code design.</p>
<p>Thanks, Chris, means a lot to me!</p>
<p>Thanks for coming out of retirement for this 😉.</p>
<p>More seriously, I was happy you were taking part in the competition, and was hoping to continue to benefit from following your sharing, especially with your recent wins in recommendation comps, but I guess you didn't end up having the time for it.</p> |
H&M Personalized Fashion Recommendations | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>No, for the U2I and I2I I control the data usage very well. </p>
<p>Never heard of "item2vec" before, but it seems a promising approach. I just use gensim training a word2vec model with the article_id sequence to get the article_ids' embeddings. </p>
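<p>A minimal gensim sketch of this idea (an illustrative assumption rather than the team's code; it uses gensim 4.x parameter names and toy data):</p>
<pre><code>import pandas as pd
from gensim.models import Word2Vec

# toy transactions, already sorted by purchase date within each customer
tx = pd.DataFrame({
    "customer_id": ["u1", "u1", "u1", "u2", "u2"],
    "article_id":  ["a1", "a2", "a3", "a1", "a3"],
})

# one "sentence" per customer: the time-ordered sequence of purchased article_ids
sentences = tx.groupby("customer_id")["article_id"].apply(list).tolist()

model = Word2Vec(sentences, vector_size=32, window=5, min_count=1, sg=1, epochs=10, seed=42)

article_vec = model.wv["a1"]            # embedding of one article
print(model.wv.most_similar("a1"))      # nearest articles in embedding space
</code></pre>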
<p>Thank you for sharing this. Sounds great. I will definitely try it next time.</p>
<p>Imagine you have candidate articles A1 and A2 for customers C1, C2, and C3 on observation datetime D1. You could first calculate the article features for A1 and A2 once, since the article-feature part of the pairs (C1, A1), (C2, A1), (C3, A1) is constant even though the customers differ, which saves quite a lot of calculation time.</p>
<p>You are welcome! You also did well. Congrats!</p>
<p>First of all, thanks to the competition organizers to host such an interesting competition and my great teammates <a href="https://www.kaggle.com/rendongltt" target="_blank">@rendongltt</a> (who is a huge fan of online-shopping, which gives us a lot of help😂) and <a href="https://www.kaggle.com/jiaqizhang35" target="_blank">@jiaqizhang35</a> (who is my friend, without whom I didn't even join this competition). And also I would like to thank <a href="https://www.kaggle.com/paweljankiewicz" target="_blank">@paweljankiewicz</a>, without his great posts, we will not reach this far. And also you (and also a lot of other nice guys, who love sharing) motivated me to do more sharing with the community.</p>
<p><strong>Overview</strong><br>
Overall we loved this competition because it provided countless ways to solve the problem, especially in the possible recall methods. As for feature engineering, even though we're quite confident in our skills, we might have had no shot at a gold medal against other grandmasters on that alone, so we focused on the recall methods from the very start. Our recall methods depend heavily on article embeddings (image, text, word2vec, tfidf) and the amazing Python package faiss for finding similar articles and customers. As for feature engineering, we didn't spend too much time on it: we used different day windows to create aggregated stat features of articles and customers separately, customer-article cross aggregated stat features, and different cosine similarity features using the aforementioned embeddings.</p>
<p><strong>Recall methods</strong><br>
The recall methods comprised of those parts:</p>
<ol>
<li>Customer's last buckets (and articles with the same product code as the last bucket's articles), recently bought articles</li>
<li>U2I: User based collaborative filtering. </li>
<li>I2i: Item based collaborative filtering</li>
<li>word2vec: to predict customer's next possible purchase articles after last buckets</li>
<li>Articles bought together. I modified this nice <a href="https://www.kaggle.com/code/titericz/article-id-pairs-in-3s-using-cudf" target="_blank">notebook</a> from <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a></li>
<li>Popular articles with different customers' attributes like age bin, gender etc.<br>
So overall we use 21 recall methods to create article candidates.</li>
</ol>
<p><strong>Embedding methods</strong><br>
We created article embdding using different data and methods</p>
<ol>
<li>For article images, we use swin transformer to extract the embedding features.</li>
<li>For the article texts, we first concatenate all text values of each article and then use SentenceTransformer to extract the embeddings</li>
<li>Using this great <a href="https://www.kaggle.com/code/aerdem4/h-m-rapids-article2vec" target="_blank">notebook</a> from <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a> you can easily get article embeddings using tfidf method</li>
<li>The last but most useful method to get article embeddings is the word2vec method, for which we use gensim to ease the process.</li>
</ol>
<p><strong>Recall method calibration</strong><br>
Because different recall methods have different positive ratio, so it's better to use some method to calibrate all the recall methods. I use the method which I shared <a href="https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations/discussion/321672#:~:text=I%20would%20sugguest,goes%20like%20this." target="_blank">here</a> to do the preprocess.</p>
<p><strong>Feature engineering</strong><br>
Nothing special about this part but I found some method which will hugely accelerate the computing time for feature computing. <br>
Basicly we divide the features into 3 parts:</p>
<ol>
<li>article features, which are constant across all customers once you fix the observation time</li>
<li>similarly, customer features, which are constant across all candidate articles once you fix the observation date</li>
<li>customer-article features, like the cosine similarity between each customer-article pair and features aggregated from the customer's purchase history of this particular article.<br>
For parts 1 and 2, you can do the computation once, save the results, and merge them with the part 3 features later. This method really saved me a lot of time, especially at inference time.</li>
</ol>
<p><strong>Modeling</strong> <br>
Because we spend most of the time to devolop recall strategies, we use only lightgbm ranker model for training and don't even have time to try other methods like catboost and xgboost. So basically our score is single model's score.</p>
<p><strong>Remaining question</strong><br>
We have one question that we still couldn't solve it right now, which is about the word2vec model. We use all transaction data to train the word2vec model, which actually leaked the future info a little bit. <br>
Imagne that for observation date 20200915 you compute the customer-article cosine similarity using the word2vec model trained using all transaction data. The word2vec embedings of two article ids aid_1, aid_2, which maybe mainly appeared duing week 20200916-20200922, are used on the observation date 20200915. <br>
The effect is everytime I use those embeddings to create candidates or compute features, the local CV will improve a lot while the LB improves only a little bit. For my final ranking model, my map@12 score for last week is 0.0441, but the LB is only 0.0350, the gap is much larger than the ones mentioned by others. But if we don't use this feature, both CV and LB score are dropping. So we don't really know how to do it to make the most use of this method. What do you think?</p>
<p>Glad to hear that!<br>
So you have a lot of candidate articles for each customer. After you get the embedding of each article, you can compute the customer's embedding representation based on the customer's transaction history (aggregate the articles' embeddings to compute the mean/median embedding). Then, with the embedding of the customer and the embedding of the candidate article, you can calculate the cosine similarity between the customer and the article using dot(vec1, vec2) / (norm(vec1) * norm(vec2)), which represents how close the customer and the article are.</p>
<p>Glad to hear that. And you also did well! Congrats!</p>
<p>Got it. Thanks! Then you must train a lot of word2vec models.</p>
<p>Yes, I just wonder how other teams deal with this problem to make the CV-LB gap much smaller than mine.</p>
<p>How "lucky" you are… 😂 Anyway you did really well and all the best staying at home. </p>
<p>Yes, those features help a lot. And I don't think it's meaningless. We all know every customer has his/her own preference for styles(articles). And if our representation embeddings of articles are not bad, they have the ability to distinguish the styles. For example, if our embedding is 1-dimensional vector, a set of articles' embedding have values like [0.89], [0.90], [0.902]… Calculating average of those embeddings, you get a value around [0.9]. For another style set of articles, if one customer frequently bought a lot, after aggregating the embeddings of those articles, you get the mean embedding vector [0.2]. Well now we know where the separating capability of this method comes from.<br>
It may lose some info, but the noise info is also filtered and we get the main taste of the customer.</p>
<p>For word2vec, I mean using customers' time-ordered purchase article sequence to train a word2vec model getting the embeddings of articles.</p>
<p>Thanks for sharing your method. Guess I was too lazy to even think about training more than one word2vec model to avoid the leak. And also congrats to your solo 3rd place finish. Your score improving progress during last weeks are quite impressive!</p> |
H&M Personalized Fashion Recommendations | 63th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/satoshidatamoto" target="_blank">@satoshidatamoto</a> <br>
Thank you for your celebrating!</p>
<h1>63th Place Solution</h1>
<p>Thanks to my teammates <a href="https://www.kaggle.com/ootake" target="_blank">@ootake</a>, <a href="https://www.kaggle.com/ryoichi0917" target="_blank">@ryoichi0917</a>, <a href="https://www.kaggle.com/xia464" target="_blank">@xia464</a>, especially <a href="https://www.kaggle.com/tyonemoto" target="_blank">@tyonemoto</a> for inviting me.</p>
<h2>Code (updated in 2022-07-12)</h2>
<p>I publish the entire code.<br>
code: <a href="https://github.com/Y-Haneji/kaggle-handm" target="_blank">https://github.com/Y-Haneji/kaggle-handm</a></p>
<h2>Last Submission</h2>
<ul>
<li>(lr=0.1) : Private LB: 0.02813, Public LB: 0.02826, Validation: 0.033623</li>
<li>(lr=0.02): Private LB: 0.02772, Public LB: 0.0277, Validation: 0.033814</li>
</ul>
<h2>Summary</h2>
<p>I think that the key points of this competition are these.</p>
<ul>
<li>Stable validation.</li>
<li>Two stage: generating candidates by some strategies, then ranking them.</li>
<li>Better strategies and scores from them.</li>
<li>Using full information like not only transactions but also detail descriptions, images, and so on.</li>
</ul>
<h2>Ranking Model (me)</h2>
<h3>Model</h3>
<p>LightGBM lambdarank</p>
<h3>Dataset</h3>
<p>This competition gave no training datasets, only transaction data and some metadata. I needed to generate training datasets by myself. The dataset looked like this.</p>
<ul>
<li>customer_id</li>
<li>article_id (candidate)</li>
<li>score from the generating strategy (possibly more than one score)</li>
<li>flags for whether the candidate was included in each strategy</li>
<li>other features of customers</li>
<li>other features of articles</li>
<li>label (0 or 1)</li>
</ul>
<p>The features of customers and articles are purchase and sales of …</p>
<ul>
<li>each week</li>
<li>agregation (sum, mean, max, min) of all weeks</li>
<li>difference and division between each week and aggregation</li>
</ul>
<p>The label is whether the article was bought in the target week.</p>
<p>The details of the strategies to generate candidates are written below.</p>
<h3>Training</h3>
<ul>
<li>5 target weeks, which means a 5-fold training dataset.</li>
<li>For training, I only generated candidates for the customers who had some purchases in the target week.</li>
<li>Model Validation: NDCG@12. The dataset of the next target week was the validation dataset. The last target week's training dataset had no validation dataset.</li>
<li>Submission Validation: MAP@12 for last week's transactions.</li>
</ul>
<h3>Parameter</h3>
<ul>
<li>objective: lambdarank</li>
<li>lr: 0.1 or 0.02</li>
<li>feature_fraction: 0.8</li>
<li>bagging_fraction: 0.8</li>
<li>early_stopping: 10 or 50 rounds (depend on lr)</li>
</ul>
<p>Other parameters are left at their defaults. I tried Optuna, but it didn't work well for lambdarank because it attempted to LOWER the MAP.</p>
<h3>Prediction (Submission)</h3>
<ul>
<li>For prediction, I had to generate candidates for all the customers, which causes a memory problem.</li>
<li>Split the customers into batches.</li>
<li>Ensemble the 5 target weeks' models.</li>
<li>Prediction generally takes longer than training.</li>
<li>If validation was not good, I didn't compute the submission.</li>
</ul>
<h2>Candidates Strategies</h2>
<h3>Worked (included last submission)</h3>
<ul>
<li>trending (original: <a href="https://www.kaggle.com/code/byfone/h-m-trending-products-weekly" target="_blank">here</a>)</li>
<li>recently purchased</li>
<li>bin popular</li>
<li>last week for each customer</li>
<li>also bought this (original: <a href="https://www.kaggle.com/code/cdeotte/customers-who-bought-this-frequently-buy-this" target="_blank">here</a>)</li>
<li>collabo</li>
<li>t-SNE KNN</li>
<li>Albert KNN</li>
<li>sequence Word2Vec</li>
</ul>
<h3>Not Worked</h3>
<ul>
<li>Bert KNN</li>
<li>VGG KNN</li>
<li>Gru4Rec, Bert4Rec (LSTM) (original: <a href="https://www.kaggle.com/code/astrung/lstm-model-with-item-infor-fix-missing-last-item" target="_blank">here</a>)</li>
<li>Implicit ALS</li>
</ul>
<h3>Details of Some Strategies</h3>
<p>TO BE UPDATED: Some strategies will be written in some days.</p>
<h4>bin popular (by me and by <a href="https://www.kaggle.com/tyonemoto" target="_blank">@tyonemoto</a>)</h4>
<p>Recommending popular items was effective, especially for cold-start customers. Different kinds of customers have different preferences, so we split customers into groups by their features, like age, sex, and so on.</p>
<p>In the experiment phase, we used age, price, and transaction channel for binning. The number of splits is 10 for age, 20 for price, and 3 for channel. The recall of this strategy was 0.06750.</p>
<p>Because of the deadline, we had no choice but to adopt only the age bins.</p>
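<p>A small pandas sketch of age-bin popularity candidates (illustrative only; the toy data and bin edges are made up, and the real version also used price and channel bins as described above):</p>
<pre><code>import pandas as pd

# toy data: customer ages and last-week transactions
customers = pd.DataFrame({"customer_id": ["u1", "u2", "u3"], "age": [23, 47, 51]})
last_week = pd.DataFrame({
    "customer_id": ["u1", "u2", "u3", "u2"],
    "article_id":  ["a1", "a2", "a2", "a3"],
})

# split customers into age bins (10 bins in the experiments described above)
customers["age_bin"] = pd.cut(customers["age"], bins=[0, 30, 50, 100], labels=False)

# most purchased articles per age bin in the last week
merged = last_week.merge(customers[["customer_id", "age_bin"]], on="customer_id")
bin_popular = (
    merged.groupby(["age_bin", "article_id"]).size()
    .reset_index(name="sales")
    .sort_values(["age_bin", "sales"], ascending=[True, False])
    .groupby("age_bin")
    .head(12)          # 12 candidates per bin
)
print(bin_popular)
</code></pre>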
<h4>collabo (by <a href="https://www.kaggle.com/tyonemoto" target="_blank">@tyonemoto</a>)</h4>
<p>The collaboration articles with Kangol were launched a few weeks before the submission week. We recommended popular collaboration articles to customers who had bought some collaboration articles. The recall of this strategy was 0.00094.</p>
<h4>t-SNE (by <a href="https://www.kaggle.com/tyonemoto" target="_blank">@tyonemoto</a>)</h4>
<p>The article metadata from articles.csv and embedding vectors of the detail descriptions (from tf-idf and BERT) are compressed to two dimensions by t-SNE. We used count encoding to deal with the categorical metadata columns.</p>
<h4>Albert (by <a href="https://www.kaggle.com/ryoichi0917" target="_blank">@ryoichi0917</a>)</h4>
<h4>sequence Word2Vec (by <a href="https://www.kaggle.com/tyonemoto" target="_blank">@tyonemoto</a>)</h4>
<h4>Bert (by <a href="https://www.kaggle.com/tyonemoto" target="_blank">@tyonemoto</a> and <a href="https://www.kaggle.com/ootake" target="_blank">@ootake</a>)</h4>
<p>We used bert pretrained model named "bert-uncased", "MRPC", and "SQuAD" to extract features from articles' detail description.</p>
<h4>VGG (by me)</h4>
<p>I extracted features from images using VGG because its vectors are relatively low-dimensional, which helps avoid memory problems. But even these reached 10GB files (when saved as .npy).<br>
VGG KNN didn't work, but I wonder whether I could have used other, bigger models to extract features.</p>
<h4>Implicit ALS (by <a href="https://www.kaggle.com/ootake" target="_blank">@ootake</a>)</h4>
<h3>Comparision</h3>
<p>The true purpose of this competition is not to explore strategies which have better Precision and Recall, but to obtain better MAP@12 for submission, so I compared each strategy based on MAP@12.</p>
<table>
<thead>
<tr>
<th>Strategy</th>
<th>MAP@12 change</th>
</tr>
</thead>
<tbody>
<tr>
<td>trending</td>
<td>base lines</td>
</tr>
<tr>
<td>recently</td>
<td>base lines</td>
</tr>
<tr>
<td>popular</td>
<td>base lines</td>
</tr>
<tr>
<td>last week</td>
<td>-0.000131</td>
</tr>
<tr>
<td>t-SNE</td>
<td>+0.00053</td>
</tr>
<tr>
<td>umap</td>
<td>-0.0001</td>
</tr>
<tr>
<td>t-SNE & umap</td>
<td>-0.000898</td>
</tr>
<tr>
<td>also bought this</td>
<td>+0.000533</td>
</tr>
<tr>
<td>albert</td>
<td>+0.000241</td>
</tr>
<tr>
<td>collaboration</td>
<td>+0.000134</td>
</tr>
<tr>
<td>Bert</td>
<td>+0.000051 (didn't correlate with LB)</td>
</tr>
<tr>
<td>sequence Word2Vec w/ compression</td>
<td>+0.000367</td>
</tr>
<tr>
<td>bin popular</td>
<td>+0.00058</td>
</tr>
<tr>
<td>bert4rec</td>
<td>-0.00037</td>
</tr>
<tr>
<td>gru4rec</td>
<td>-0.000347</td>
</tr>
<tr>
<td>Implicit ALS</td>
<td>+0.000142</td>
</tr>
<tr>
<td>sequence Word2Vec</td>
<td>-0.000563</td>
</tr>
<tr>
<td>VGG</td>
<td>-0.000375</td>
</tr>
</tbody>
</table>
<p>Note: This is the PROCESS of adding or subtracting the strategies rather than the result of a comparison experiment.</p>
<h2>Computing Resources</h2>
<ul>
<li>Until 2 weeks before the deadline: MacBook Air (M1, 2020) (8-core CPU, 16 GB RAM, no GPU for deep learning).</li>
<li>Last 2 weeks: GCP instances to deal with more candidates. (48 core CPU, 384 GB RAM)</li>
</ul>
<h2>Some Story</h2>
<p>This is my first medal in Kaggle, so I'm very happy with my place! </p>
<p>I am new to recommendation problems, but my teammates helped me a lot, and <a href="https://www.kaggle.com/paweljankiewicz" target="_blank">@paweljankiewicz</a> and <a href="https://www.kaggle.com/lihaorocky" target="_blank">@lihaorocky</a> were so kind as to answer my questions in the discussions.</p>
H&M Personalized Fashion Recommendations | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/cherrizhu" target="_blank">@cherrizhu</a> <br>
<code>from sklearn.feature_extraction.text import TfidfVectorizer</code><br>
<code>docs = transactions.groupby(['customer_id'])['article_id'].apply(lambda x: ' '.join(x.astype(str))).reset_index()</code><br>
<code>enc = TfidfVectorizer()</code><br>
<code>enc.fit(docs['article_id'])</code><br>
something like this (using sklearn's TfidfVectorizer as the tfidf encoder, and fitting on the space-joined article_id "sentences").</p>
<p>First of all, thanks to the competition organizers for hosting this interesting competition and to my great teammates (@Giba, <a href="https://www.kaggle.com/qysx" target="_blank">@qysx</a>, <a href="https://www.kaggle.com/hyd" target="_blank">@hyd</a>). Secondly, thanks to the kaggle community; I have learned a lot from the great kernels and discussions.</p>
<p>In this competition, we were asked to recommend products to users based on their historical purchasing behavior. To solve this problem, our solution splits into two parts: recall & rank. A good candidate generation strategy and good feature engineering are both important for winning a good place.</p>
<h3>Recall</h3>
<p>We used several methods to generate different candidates, in order to improve the coverage of positive samples(both user and user-item).</p>
<ol>
<li>u2i: items that the user recently purchased</li>
<li>i2i: item based collaborative filtering</li>
<li>u2tag2i: tag can be "product_code", "product_type_no", "department_no", "section_no"</li>
<li>hot: generate hot items for different age_bins's user</li>
</ol>
<h3>Features</h3>
<p>Most important features are as follows,</p>
<ol>
<li>count: user-, item-, and user-item-based counts; it's important to use all historical data to generate the user features.</li>
<li>gap: time from the user's, item's, and user-item pair's first/last purchase to now (a small sketch of these gap features follows this list). The item's first purchase time can represent the "release time" of the item, so we can use the user as a key to calculate statistics of "release time", which may capture the user's preference for new or old items.</li>
<li>items' discount: use the max/mean to represent the usual price of the item, and use the price on each row to calculate the item's discount. From this we can get the user's preference for discounted items and the number of days an item has recently been on sale.</li>
<li>tfidf: treat the user's historically purchased article/product_code/product_type_no as a sentence, then use tfidf+svd to generate features.</li>
<li>item_sim: the i2i collaborative filtering score, calculated between the candidate and the user's historically purchased items.</li>
<li>setting categorical_features ("department_no", "product_type_no", etc.) for lightgbm improves the cv score by about 0.0005~0.0008.</li>
</ol>
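<p>As referenced in the "gap" item above, a small pandas sketch of those days-since-last-purchase features might look like this (illustrative toy data; the reference date would be the first day of the target week):</p>
<pre><code>import pandas as pd

# toy transactions with dates
tx = pd.DataFrame({
    "customer_id": ["u1", "u1", "u2"],
    "article_id":  ["a1", "a2", "a1"],
    "t_dat": pd.to_datetime(["2020-09-01", "2020-09-10", "2020-09-15"]),
})
ref_date = pd.Timestamp("2020-09-23")

def gap_days(df, keys):
    """Days from the last purchase (per key) to the reference date."""
    last = df.groupby(keys)["t_dat"].max().reset_index()
    last["gap_days"] = (ref_date - last["t_dat"]).dt.days
    return last.drop(columns="t_dat")

user_gap = gap_days(tx, ["customer_id"])                         # since the user's last purchase
item_gap = gap_days(tx, ["article_id"])                          # since the item was last sold
user_item_gap = gap_days(tx, ["customer_id", "article_id"])      # since the user last bought this item
print(user_item_gap)
</code></pre>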
<h3>Model</h3>
<p>We use the last 9 weeks for training (the last week for validation). Our best single model is catboost with a cv score of 0.0403 and an lb score of 0.0341. We ensemble several models (cv: 0.0412, lb: 0.0348), and we use the kernel (<a href="https://www.kaggle.com/code/titericz/h-m-ensembling-how-to" target="_blank">https://www.kaggle.com/code/titericz/h-m-ensembling-how-to</a>) Giba shared before to handle the different candidate lists when blending.</p>
<table>
<thead>
<tr>
<th>model</th>
<th>objective</th>
<th>average num of candidates for each user</th>
<th>cv map@12</th>
</tr>
</thead>
<tbody>
<tr>
<td>catboost</td>
<td>binary</td>
<td>120</td>
<td>0.0403</td>
</tr>
<tr>
<td>lightgbm</td>
<td>binary</td>
<td>120</td>
<td>0.0395</td>
</tr>
<tr>
<td>catboost</td>
<td>binary</td>
<td>220</td>
<td>0.0402</td>
</tr>
<tr>
<td>lightgbm</td>
<td>binary</td>
<td>220</td>
<td>0.0396</td>
</tr>
<tr>
<td>lightgbm</td>
<td>lambdarank</td>
<td>220</td>
<td>0.0400</td>
</tr>
<tr>
<td>lightgbm</td>
<td>lambdarank</td>
<td>1000</td>
<td>0.0381</td>
</tr>
</tbody>
</table>
<p>Our best cv result was also the best one on the private LB. Trusting your local cv is really important in kaggle.</p>
<h3>Giba's solution</h3>
<p>Set up a candidate list of around 10k article_ids based on the sales popularity of the last 2 weeks. Build a binary dataset with all combinations of customer_id x 10k article_id. Build that huge dataset in batches, using cudf to speed up the entire process. Create a binary trainset from weeks 90 to 104. Train the model on weeks 90-103, validate on week 104. To train the model, select 200, 300 or 500 random negative candidates for each customer_id. Blend models trained using LightGBM lambdarank, LightGBM xendcg, XGBoost lambdarank and CatBoost ranker. The best features are based on the probability of purchasing each article in a given period range and the time passed since the last purchase.<br>
More details on <a href="https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations/discussion/324278" target="_blank">Giba Solution</a>.</p>
<h3>To be continued</h3>
<p><a href="https://www.kaggle.com/souamesannis" target="_blank">@souamesannis</a> Target y represents whether the user purchase the item(candidates) or not in the week which need to predict.</p> |
H&M Personalized Fashion Recommendations | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks! Yes, you're right.</p>
<p>Thanks! I would have liked to embed customer purchase history in LSTM, etc.</p>
<p>Thanks! By chance my first name is Kazuki and team <code>Kazuki**2+shimacos*1e-3</code> has two members named Kazuki. I don't know if <code>Kazuki**2-aragaki</code> has the name Kazuki.</p>
<p>Yes, I remove overlap candidates for each customer.</p>
<pre><code>1 - What you consider "most popular"? Is the most purchased items?
In this candidate strategy you used "sales in the last week" as ranking method.
For instance, for target_week = 105, most popular would be this as follows?
# if using 12 candidates for this strategy
train[train['week'] == 104]['article_id'].value_counts().index.tolist()[:12]
</code></pre>
<p>Yes, you're right.</p>
<blockquote>
<p>2 - You used 6 strategies. When you say "I select top 100 candidates for all candidate type." you mean you ended with 600 candidates (100 for each candidate strategy) or you mean 100 candidates after binding them all?</p>
</blockquote>
<p>600 candidates is correct. The actual number of candidates will be much smaller than 600 because of the overlap between strategies.</p>
<p>For example, if customer X purchased three items A,B,C, I generated candidates for customer X as follows.</p>
<ol>
<li>extract three customer groups A, B, C that have purchased each item</li>
<li>calculate sales in the last 1 week for each customer group A, B, C</li>
<li>simply add the sales of A, B and C, and select the top 100 as candidates</li>
</ol>
<p>Does this answer your question?</p>
<p>Thanks! <br>
My user-based cf strategy is very simple. For each item, extract the group of customers who purchased that item and compute that group's purchase trend over the last week. Then, for each customer, sum the group trends of the items in their purchase history.</p>
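<p>A naive, unoptimized sketch of that user-based CF scoring; the column names, <code>week</code> indexing and helper name are assumptions, not the author's code.</p>
<pre><code>from collections import Counter

# trans is assumed to have columns [customer_id, article_id, week]
def user_based_cf_candidates(trans, last_week, topn=100):
    # customers who ever bought each item
    buyers = trans.groupby("article_id")["customer_id"].apply(set)
    # each customer's purchases in the last week
    recent = trans[trans["week"] == last_week]
    recent_by_cust = recent.groupby("customer_id")["article_id"].apply(list)

    # "group trend" per item: what that item's buyers bought last week
    trend = {}
    for art, custs in buyers.items():
        cnt = Counter()
        for c in custs:
            cnt.update(recent_by_cust.get(c, []))
        trend[art] = cnt

    # sum the group trends over each customer's own purchase history
    candidates = {}
    for cust, hist in trans.groupby("customer_id")["article_id"]:
        total = Counter()
        for art in hist:
            total.update(trend.get(art, Counter()))
        candidates[cust] = [a for a, _ in total.most_common(topn)]
    return candidates
</code></pre>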
<p>Thanks for your comment!</p>
<blockquote>
<p>If I am reading your post right, if I were to generate candidates for each customer from only items they purchased before, and rank them by purchased date, that would give me a cv of 0.029? </p>
</blockquote>
<p>Yes, you're right.</p>
<blockquote>
<p>In this approach, what did you do for customers that didn't have any purchases before, or had less than 12 purchases?</p>
</blockquote>
<p>Such customers get no predictions or fewer than 12 predictions. If we fill the blank slots with the most popular candidates, the cv score goes up.</p>
<p>I created “product code statistics” features as follows.</p>
<ul>
<li>number of product codes multi-purchased / number of product codes purchased (for each customer id)</li>
<li>number of purchases of the candidate item's product code (for each customer id)</li>
<li>mean number of purchases of the candidate item's product code (for each product code)</li>
</ul>
<p>First of all, thanks to the Kaggle and H&M staff for hosting this interesting competition!<br>
I am very happy to get a solo gold medal and become a Kaggle Master!<br>
I'll share my solution.</p>
<h2>Overview</h2>
<p>The basic strategy is the same as the other top winners, with the following three steps.</p>
<ol>
<li>generate item candidates for each customers</li>
<li>create features</li>
<li>learning to rank with LGBMRanker</li>
</ol>
<h2>Candidates</h2>
<p>I select the top 100 candidates for each candidate type.</p>
<table>
<thead>
<tr>
<th>candidate type</th>
<th>ranking method</th>
<th>cv</th>
</tr>
</thead>
<tbody>
<tr>
<td>repurchased</td>
<td>purchase date</td>
<td>0.029</td>
</tr>
<tr>
<td>same product code</td>
<td>purchase date</td>
<td>0.014</td>
</tr>
<tr>
<td>user based cf</td>
<td>sales in the last week</td>
<td>0.024</td>
</tr>
<tr>
<td>most popular</td>
<td>sales in the last week</td>
<td>0.017</td>
</tr>
<tr>
<td>most popular categorized by age</td>
<td>sales in the last week</td>
<td>0.018</td>
</tr>
<tr>
<td>most popular categorized by sales channel id</td>
<td>sales in the last week</td>
<td>0.016</td>
</tr>
</tbody>
</table>
<h2>Features</h2>
<h4>Customer features</h4>
<p>I created various customer features, but they did not work.</p>
<h4>Article features</h4>
<ul>
<li>article attributes (except for article_id and product_cd, which cause overfitting)</li>
<li>sales in the last N days or N weeks</li>
<li>mean sales channel id</li>
<li>release date</li>
<li>mean price</li>
<li>repurchase statistics</li>
<li>product code statistics</li>
<li>rank in the candidate type</li>
</ul>
<h4>Customer x Article features</h4>
<ul>
<li>number of purchases with the same attributes as the candidate item</li>
<li>purchase date with the same attributes as the candidate item</li>
<li>difference between mean customer price and mean article price</li>
<li>score by user based cf</li>
</ul>
<h2>Model</h2>
<p>I use an LGBMRanker model and train one model per CV fold (a training sketch follows the list below).<br>
I make a weighted ensemble of the model predictions across the CV folds.</p>
<ul>
<li>valid set: weeks 98-104 (7 folds)</li>
<li>training set: the 3 weeks preceding each valid week</li>
</ul> |
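<p>A minimal per-fold LGBMRanker training sketch matching the setup above; the frame name <code>df</code>, its columns, and the hyperparameters are assumptions, not the author's code.</p>
<pre><code>import lightgbm as lgb

# df: candidate rows with features plus "label", "week", "customer_id",
# sorted by customer_id within each week so the group sizes line up
feature_cols = [c for c in df.columns if c not in ("label", "week", "customer_id")]

models = []
for valid_week in range(98, 105):                  # 7 folds
    train = df[df["week"].between(valid_week - 3, valid_week - 1)]
    valid = df[df["week"] == valid_week]

    # LGBMRanker needs the query sizes (number of candidates per customer)
    train_group = train.groupby("customer_id").size().values
    valid_group = valid.groupby("customer_id").size().values

    ranker = lgb.LGBMRanker(objective="lambdarank", n_estimators=500)
    ranker.fit(
        train[feature_cols], train["label"], group=train_group,
        eval_set=[(valid[feature_cols], valid["label"])],
        eval_group=[valid_group], eval_at=[12],
    )
    models.append(ranker)
</code></pre>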
H&M Personalized Fashion Recommendations | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><blockquote>
<p>Does it mean that if get 50 positive candidates and 150 negative candidates , label 50 all positive as 1 and 75 negative as 0?</p>
</blockquote>
<p>Yes, and do this per customer, because the number of candidates differs for each customer. There may be a better method, but I had no time to focus on this.</p>
<blockquote>
<p>Could you share your code on github ?</p>
</blockquote>
<p>I have no plan to release my messy code so far. I may try to clean my code and add some comments if I have time to do that.</p>
<blockquote>
<p>Each event type can be assigned a different weight, for example, we might assign a “buy” event a weight of 10, while a “view” event might only have a weight of 1.</p>
</blockquote>
<p>As shown in MS official example, the event weight shows the importance of this interaction. Thus, I give the same value to every behavior since all of them are "buy".</p>
<blockquote>
<p>Could you share the sorted importance of features used in lightgbm?</p>
</blockquote>
<p>As expected, the most important features in the top 10 are related to repurchase, such as how many days have passed since the customer last bought this item/product and how many times the customer has bought this item/product. </p>
<blockquote>
<p>How long is the embedding dimension?</p>
</blockquote>
<p>I have tried sizes 32 and 64 and the performance of size 64 is slightly better than size 32 on my local CV. But I finally use size 32 considering the limitation of my hardware.</p>
<p>It has been a long time since I last saw a recommendation competition on Kaggle, so great thanks to the Kaggle team and H&M for hosting such an interesting one. The high correlation between CV and LB, the large training data for deep data mining, no leakage of test information, and the friendly staff and participants all made this a successful competition.</p>
<h1>Summary</h1>
<p>The key idea of my solution will be illustrated here for fast reading purposes.</p>
<h5>Validation strategy</h5>
<pre><code>train = transactions.loc[transactions["t_dat"] < "2020-09-16"]
valid = transactions.loc[transactions["t_dat"] >= "2020-09-16"]
</code></pre>
<p>It is better to use the k-fold validation from <a href="https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations/discussion/308919" target="_blank">this post</a> to avoid overfitting to the last week, if you have enough computing power.</p>
<p>But from my experience in this competition, the LB score and CV score are highly correlated even when using only the last week as the validation set.</p>
<h5>Recall</h5>
<table>
<thead>
<tr>
<th>Recall method</th>
<th>link</th>
</tr>
</thead>
<tbody>
<tr>
<td>SAR</td>
<td><a href="https://github.com/microsoft/recommenders/blob/main/examples/00_quick_start/sar_movielens.ipynb" target="_blank">notebook</a></td>
</tr>
<tr>
<td>Trending</td>
<td><a href="https://www.kaggle.com/code/byfone/h-m-trending-products-weekly" target="_blank">notebook</a></td>
</tr>
<tr>
<td>Most popular items based on age</td>
<td>/</td>
</tr>
</tbody>
</table>
<h5>Ranking</h5>
<p>LightGBM trained with 20 weeks of data using around 300 features</p>
<h5>Ensemble</h5>
<p>I had no time to train all 5 models with the k-fold split on the train set; only 3 of them were completed before the deadline. So my final submission is based on 1 model trained on all data samples plus the 3 models trained on the k-fold split mentioned before (4 models in total).</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>public score</th>
<th>private score</th>
</tr>
</thead>
<tbody>
<tr>
<td>one model</td>
<td>0.0338</td>
<td>0.0341</td>
</tr>
<tr>
<td>ensemble</td>
<td>0.0343</td>
<td>0.0346</td>
</tr>
</tbody>
</table>
<h5>Optimization</h5>
<p>Using cudf and Forest Inference Library from Rapids for fast feature generation and fast tree inference</p>
<h5>Hardware</h5>
<table>
<thead>
<tr>
<th>Hardware</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>CPU</td>
<td>32core</td>
</tr>
<tr>
<td>Memory</td>
<td>128G</td>
</tr>
<tr>
<td>GPU</td>
<td>V100 32G</td>
</tr>
</tbody>
</table>
<h1>Data Analysis</h1>
<ol>
<li>We can find 90%+ items in the validation set also in the last 30 days</li>
</ol>
<pre><code>train_condition = (trans["t_dat"] >= "2020-08-16") & (trans["t_dat"] < "2020-09-16")
items_in_valid = trans.loc[trans["t_dat"] >= "2020-09-16", "article_id"].unique()
items_last_month = trans.loc[train_condition, "article_id"].unique()
print(len(set(items_in_valid) & set(items_last_month)) / len(set(items_in_valid)))
# 0.92
</code></pre>
<ol>
<li>Approximately 50% of customers in the validation set have transaction records in the last 30 days</li>
</ol>
<pre><code>train_condition = (trans["t_dat"] >= "2020-08-16") & (trans["t_dat"] < "2020-09-16")
users_in_valid = trans.loc[trans["t_dat"] >= "2020-09-16", "customer_id"].unique()
users_last_month = trans.loc[train_condition, "customer_id"].unique()
print(len(set(users_in_valid) & set(users_last_month)) / len(set(users_in_valid)))
# 0.47
</code></pre>
<p>Considering these results, we can focus on recommending items bought in the last 30 days, especially for those customers who have transaction records.</p>
<h1>Recall</h1>
<p>Only the mapk of each recall method on the local validation set is recorded. Most of these ideas come from public notebooks or discussions, and I give the link where that is the case.</p>
<h5>Most popular items last 7 days</h5>
<p>Simply sort items by their transaction counts. This is also the fallback rule for customers who have no age data.</p>
<table>
<thead>
<tr>
<th>Recall method</th>
<th>mapk</th>
</tr>
</thead>
<tbody>
<tr>
<td>Top 12 items</td>
<td>0.00814889</td>
</tr>
</tbody>
</table>
<h5>Most popular items last 7 days by age</h5>
<p>We are doing personalized recommendation, so it's better to base the rule-based recall on some customer features. Unfortunately, only 2 features seemed valuable to me at first glance: age and postal code. It's impossible to derive a location feature from the postal code since it is encoded, so the only remaining choice is age.</p>
<pre><code>bins = [0, 18, 22, 28, 35, 45, 55, 65, 200]
# assumes the customer's age has been merged into last_week_transactions
last_week_transactions["age_bin"] = pd.cut(last_week_transactions["age"], bins=bins)
# count sales of each article within each age bin
last_week_transactions_popular = last_week_transactions.groupby(["age_bin", "article_id"]).size()
</code></pre>
<p>Bins are simply tuned with last week's data. The mapk score is slightly higher than only using the most popular items.</p>
<table>
<thead>
<tr>
<th>Recall method</th>
<th>mapk</th>
</tr>
</thead>
<tbody>
<tr>
<td>Top 12 items</td>
<td>0.0081488</td>
</tr>
<tr>
<td>Top 12 items by age</td>
<td>0.0091866</td>
</tr>
</tbody>
</table>
<h5>Consider items that customers have bought before</h5>
<p>The first method is just adding items that customers have bought in the last 4 weeks. Similar to <a href="https://www.kaggle.com/code/hengzheng/time-is-our-best-friend-v2" target="_blank">this notebook</a></p>
<table>
<thead>
<tr>
<th>Recall method</th>
<th>mapk</th>
</tr>
</thead>
<tbody>
<tr>
<td>Top 12 items</td>
<td>0.0081488</td>
</tr>
<tr>
<td>Top 12 items by age</td>
<td>0.0091866</td>
</tr>
<tr>
<td>bought last 4 week +Top 12 items by age</td>
<td>0.0245722</td>
</tr>
</tbody>
</table>
<h5>Use Trending</h5>
<p>There is a significant increase in mapk. But since we know only half of the customers have transaction history in the last 30 days, we can do better. According to <a href="https://www.kaggle.com/code/byfone/h-m-trending-products-weekly" target="_blank">this notebook</a>, there is a wonderful method to compute a score of each item for every customer without the 30-day limitation.</p>
<table>
<thead>
<tr>
<th>Recall method</th>
<th>mapk</th>
</tr>
</thead>
<tbody>
<tr>
<td>Top 12 items</td>
<td>0.0081488</td>
</tr>
<tr>
<td>Top 12 items by age</td>
<td>0.0091866</td>
</tr>
<tr>
<td>bought last 4 week +Top 12 items by age</td>
<td>0.0245722</td>
</tr>
<tr>
<td>Trending + Top 12 items by age</td>
<td>0.0255676</td>
</tr>
</tbody>
</table>
<h5>Recommend with SAR</h5>
<p>SAR is a kind of item-based collaborative filtering. As mentioned in <a href="https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations/discussion/316205" target="_blank">this post</a>, we can get 0.27+ with SAR. Here is the <a href="https://github.com/microsoft/recommenders/blob/main/examples/00_quick_start/sar_movielens.ipynb" target="_blank">example</a> from the Microsoft official repo. My SAR is trained on the last 30 days; adding more data made the performance worse in my case.</p>
<table>
<thead>
<tr>
<th>experiment</th>
<th>mapk</th>
</tr>
</thead>
<tbody>
<tr>
<td>Top 12 items</td>
<td>0.0081488</td>
</tr>
<tr>
<td>Top 12 items by age</td>
<td>0.0091866</td>
</tr>
<tr>
<td>bought last 4 week +Top 12 items by age</td>
<td>0.0245722</td>
</tr>
<tr>
<td>Trending + Top 12 items by age</td>
<td>0.0255676</td>
</tr>
<tr>
<td>SAR + Trending + Top 12 items by age</td>
<td>0.0278576</td>
</tr>
</tbody>
</table>
<h5>Overall recall strategy</h5>
<p>Let's say we are going to recommend K items for each customer. My final recall strategy is as follows (a minimal sketch follows the list).</p>
<ol>
<li>Recommend with SAR if the customer has a transaction in the last 30 days</li>
<li>If the number of candidates is less than K, pick items from the trending method (remove items that have already been in the first step)</li>
<li>If the number of candidates is still less than K, fill with most popular items by age (using most popular items overall if the age data is missing)</li>
<li>Each recall method has a score and we will use them as features in the ranking phase.</li>
</ol>
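<p>A minimal sketch of the cascading fill described above; the dictionary names and helper signature are hypothetical, not the author's code.</p>
<pre><code>def build_candidates(cust_id, age_bin, K, sar_recs, trending_recs, popular_by_age):
    # sar_recs / trending_recs: dict customer_id -> ranked article list
    # popular_by_age: dict age_bin -> ranked article list (key None = overall)
    cand = list(sar_recs.get(cust_id, []))[:K]        # 1) SAR, if any history

    for art in trending_recs.get(cust_id, []):        # 2) fill with trending
        if len(cand) >= K:
            break
        if art not in cand:
            cand.append(art)

    fallback = popular_by_age.get(age_bin, popular_by_age[None])
    for art in fallback:                              # 3) popular by age / overall
        if len(cand) >= K:
            break
        if art not in cand:
            cand.append(art)
    return cand
</code></pre>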
<h1>Ranking</h1>
<p>A LightGBM classifier is used as the ranking model. In my previous experience there is no significant difference between a classifier and a lambdarank ranker, and the classifier generally gives slightly higher performance, so I didn't try the lambdarank ranker in this competition; it would still be a good option for ensembling.</p>
<h5>Aggregation features</h5>
<p>Nothing special here: just group by user, item, or (user, item) and compute some aggregations (a sketch follows the table).</p>
<table>
<thead>
<tr>
<th>Feature</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>user feature</td>
<td>average prices of items this customer has bought in the last 7/30/90 days</td>
</tr>
<tr>
<td>item feature</td>
<td>average age of customers who have bought this item in the last 7/30/90 days</td>
</tr>
<tr>
<td>time</td>
<td>how many days after the customer's last transaction record</td>
</tr>
<tr>
<td>interaction</td>
<td>whether the customer has bought the current item before</td>
</tr>
<tr>
<td>others</td>
<td>…</td>
</tr>
</tbody>
</table>
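<p>A minimal sketch of the windowed aggregations in the table above; it assumes <code>trans</code> already has a datetime <code>t_dat</code> column and the customer's age merged in, which are illustrative assumptions.</p>
<pre><code>import pandas as pd

def window_aggs(trans, ref_date, days):
    recent = trans[trans["t_dat"] > ref_date - pd.Timedelta(days=days)]
    # average price of items each customer bought in the window
    user_price = recent.groupby("customer_id")["price"].mean() \
        .rename(f"user_mean_price_{days}d")
    # average age of customers who bought each item in the window
    item_age = recent.groupby("article_id")["age"].mean() \
        .rename(f"item_mean_age_{days}d")
    return user_price, item_age

aggs = [window_aggs(trans, pd.Timestamp("2020-09-15"), d) for d in (7, 30, 90)]
</code></pre>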
<h5>Embedding feature</h5>
<p>We can treat a customer's transaction history as a sentence and use NLP techniques to pretrain customer/item embeddings. My solution uses word2vec to train the item embeddings and simply averages the item embeddings of a customer's transaction history as one of the customer features (be careful of leakage: do not use items from the current week).</p>
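<p>A hedged sketch of that embedding feature using gensim 4.x; the frame name <code>past_transactions</code> and the hyperparameters are assumptions, not the author's code.</p>
<pre><code>import numpy as np
from gensim.models import Word2Vec

# past_transactions: purchases strictly before the target week (avoids leakage)
sentences = (
    past_transactions.sort_values("t_dat")
    .groupby("customer_id")["article_id"]
    .apply(lambda arts: [str(a) for a in arts])
    .tolist()
)

w2v = Word2Vec(sentences, vector_size=32, window=10, min_count=1, sg=1, epochs=5)

def customer_vector(article_ids):
    # average the embeddings of a customer's purchased items
    vecs = [w2v.wv[str(a)] for a in article_ids if str(a) in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)
</code></pre>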
<h5>CV vs LB</h5>
<p>train with the last 20 weeks and generate 100 candidates for each customer (a bug was fixed in the submission phase several hours before the deadline, so the gap should be smaller)</p>
<table>
<thead>
<tr>
<th>validation</th>
<th>score</th>
</tr>
</thead>
<tbody>
<tr>
<td>local cv</td>
<td>0.0363</td>
</tr>
<tr>
<td>public score</td>
<td>0.0310</td>
</tr>
<tr>
<td>private score</td>
<td>0.0312</td>
</tr>
</tbody>
</table>
<h1>Optimization</h1>
<p>Feature engineering with cudf is 30X faster than using pandas in my case, which allows me to generate more candidates and compute the features in the submission phase. Loading the LightGBM models with the Forest Inference Library and running inference on GPU is 40X faster than using a 32-core CPU.</p>
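<p>A hedged sketch of FIL inference; it assumes a RAPIDS cuML version whose <code>ForestInference.load</code> accepts <code>model_type="lightgbm"</code>, and the file and variable names are placeholders (argument names can differ between RAPIDS releases).</p>
<pre><code>from cuml import ForestInference

# the trained LightGBM booster is assumed saved as text, e.g.
# model.booster_.save_model("lgbm.txt")
fil_model = ForestInference.load(
    "lgbm.txt", model_type="lightgbm", output_class=True
)
scores = fil_model.predict_proba(X_candidates)[:, 1]   # GPU batch inference
</code></pre>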
<h1>Final Train & Inference</h1>
<ul>
<li>Train</li>
</ul>
<p>Generate 200 candidates for each customer over the last 20 weeks, keeping all positive samples and randomly keeping half of the negative samples</p>
<ul>
<li>Inference</li>
</ul>
<p>Generate 400 candidates for each customer and score them with the 4 LightGBM models. Average the scores as the final score and take the top-12 items</p>
<p>Thank you for pointing out the performance of SAR at the early stage; this method not only generates good candidates but also outputs a score that can be used as a golden feature for my ranking model. Hope you get an even better result in the next RecSys competition!</p>
<p>There is a threshold for each recall method, tuned on my local validation set. If the prediction score of the SAR/trending method is below the locally tuned value, those items are removed from my candidates. So SAR doesn't actually recall K items for every customer, especially those with few transactions (their scores are lower).</p>
H&M Personalized Fashion Recommendations | A few notes... | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, thanks to Kaggle and H&M for providing a very interesting and well-designed challenge. I can't think of another recent competition where the winning solutions employed such a variety of different techniques. Also thanks to the Kaggle community for sharing insights, especially <a href="https://www.kaggle.com/paweljankiewicz" target="_blank">@paweljankiewicz</a> for providing the basic structure that I think just about everyone used. And congratulations to the winners!</p>
<p>I don't have much experience with Recommendation systems, so this competition gave me the opportunity to learn about a lot of different techniques. There's no point in detailing my solution since the winning solutions are superior, but here are a few things I discovered that may be of use in future projects:</p>
<ol>
<li><p>I experimented with several Python CF packages. In the end I used Implicit (<a href="https://github.com/benfred/implicit)" target="_blank">https://github.com/benfred/implicit)</a>, which gives good results and is faster than some of the others. cmfrec (<a href="https://github.com/david-cortes/cmfrec" target="_blank">https://github.com/david-cortes/cmfrec</a>) is also good. One of the nice things about the Implict package is that in has a built-in topn estimator, which most of the other packages lack. (One note about the Implicit package: the current Kaggle environment has an old version that is missing some key functionality.) </p></li>
<li><p>For ranking, I used LightGBM and experimented with regression, logistic regression, and lambdarank (actually rank_xendcg). Surprisingly, I got the best results from simple regression. In theory, lambdarank should work much better but I couldn't get it to perform well. I'm wondering if there's something in the hyperparameter settings that I missed?</p></li>
<li><p>My candidate retrieval algorithms generated an average of about 130 candidates per customer, but for training the ranking model, I sampled the negatives. In my case, the best ratio of positives to negatives was 1:7.</p></li>
<li><p>I think most of us had the experience of a major improvement in our CV score, but no improvement on the LB. One thing I noticed is that the gap between my CV and LB gradually expanded, but also became more consistent, as my scores improved. I think others have noted this as well. I wonder what would cause this effect?</p></li>
</ol>
<p>My final submission was:</p>
<p>CV: 0.03572<br>
LB: 0.03277<br>
PB: 0.03336</p> |
H&M Personalized Fashion Recommendations | Part of 22nd solution - single LGBM (Private:0.03038) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: H&M Personalized Fashion Recommendations
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your comments!<br>
In my case, the results were as below.<br>
Classifier-> CV:0.03559 Public:0.0303<br>
Ranker->CV:0.03497 Public: not submitted<br>
I did not submit the ranker because it takes too long to make a submission file…</p>
<p>Thank you for your comment!<br>
I used public notebook's articles2vec<br>
<a href="https://www.kaggle.com/code/aerdem4/h-m-rapids-article2vec" target="_blank">h-m-rapids-article2vec</a></p>
<p>I'd like to thank this great competition's hosts and congrats winners and all medalists!</p>
<p>Super thanks <a href="https://www.kaggle.com/myaun" target="_blank">@myaun</a> <a href="https://www.kaggle.com/NARI" target="_blank">@NARI</a> <a href="https://www.kaggle.com/Moro" target="_blank">@Moro</a> <a href="https://www.kaggle.com/minguin" target="_blank">@minguin</a> for giving great ideas and solution as a team.</p>
<p>In this post, I would show you my part of solution.</p>
<p>I think my solution is quite simple but takes time to train and infer.</p>
<p>My solution is published as Kaggle notebook.<br>
If you like it, please post comments and upvote. <br>
<a href="https://www.kaggle.com/code/iwatatakuya/22nd-place-lgbm-model-single-train" target="_blank">22nd-place-lgbm-model-single-train</a><br>
<a href="https://www.kaggle.com/code/iwatatakuya/22nd-place-lgbm-model-single-infer" target="_blank">22nd-place-lgbm-model-single-infer</a></p>
<ol>
<li><p>Environment and validation strategy<br>
I mainly used Google Colab Pro+ because I needed a GPU (cudf, cuml) for fast training and inference with heavy data processing.<br>
It takes about 15 hours to train, 7 hours to infer on the validation data, and 100 hours to infer on the test data with 1 GPU.<br>
Google Colab Pro+'s multi-session made it 2 times faster, but it still takes too much time.<br>
I used the 2020/9/16~9/22 data for validation and the data up to 2020/9/15 for training.</p></li>
<li><p>Dataset for training<br>
Maybe this part is the most characteristic of my solution.<br>
First, I made a dataset of customers who bought some items in 2020/9/9~9/15, 9/2~9/8, 8/25~9/1 and 8/18~8/24.<br>
The primary keys are customer_id and article_id, and each pair has a target column with 1 (bought) or 0 (did not buy).<br>
We cannot use all the articles a customer did not buy because there are too many, so I randomly selected 30 articles as negative samples for each customer (see the sketch after this list).<br>
I think the features I made are not so special, such as n_buy for each customer and article.<br>
Please see my notebook for details.</p></li>
<li><p>Training<br>
I used LGBM Classifier (not Ranker).<br>
In my case, Classifier was better than Ranker.</p></li>
<li><p>Inference<br>
I scored each pair of customer and item and sorted the scores to predict the items the customer seems most likely to buy.<br>
The candidate items are the top-selling 15,000 items in 2020/8/18~2020/9/15.<br>
The candidate items are the same for all customers.<br>
Such a large number of candidates makes inference slower, but it improves the score significantly.</p></li>
</ol>
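<p>A minimal sketch of the positive/negative dataset construction described in step 2; the function and column names are hypothetical, not the notebook's code.</p>
<pre><code>import numpy as np
import pandas as pd

def build_training_pairs(week_purchases, all_articles, n_neg=30, seed=0):
    # week_purchases: [customer_id, article_id] purchases of one target week
    rng = np.random.default_rng(seed)
    pos = week_purchases[["customer_id", "article_id"]].drop_duplicates()
    pos["target"] = 1

    rows = []
    for cust, bought in pos.groupby("customer_id")["article_id"]:
        bought = set(bought)
        sampled = rng.choice(all_articles, size=n_neg, replace=False)
        rows += [(cust, a, 0) for a in sampled if a not in bought]
    neg = pd.DataFrame(rows, columns=["customer_id", "article_id", "target"])
    return pd.concat([pos, neg], ignore_index=True)
</code></pre>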
<p>Thank you for reading this far.<br>
Questions are welcome!</p> |
Halite by Two Sigma | 17th Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Halite by Two Sigma
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone and congrats for all participants!!</p>
<p>We want to share our team solution which is placed 17th. Our team "team-tokai" consists of minaminao( <a href="https://www.kaggle.com/mnmnao" target="_blank">@mnmnao</a> ), yasagure( <a href="https://www.kaggle.com/yasagure" target="_blank">@yasagure</a> ), and Johannes( <a href="https://www.kaggle.com/johannesmdr" target="_blank">@johannesmdr</a> ).</p>
<h2>Table of Contents</h2>
<ul>
<li><a href="#approaches">Approaches</a></li>
<li><a href="#our-strategy">Our Strategy</a><ul>
<li><a href="#1-100-steps-collect-halite-and-make-ships-aggressively">1-100 steps: Collect halite and make ships aggressively</a></li>
<li><a href="#101-300-steps-attack-opponents-do-not-collect-too-many-halite-make-ships">101-300 steps: Attack opponents, do not collect too many halite, make ships</a></li>
<li><a href="#301-400-steps-mining-halite-aggressively-not-making-ships">301-400 steps: Mining halite aggressively, not making ships</a></li>
<li><a href="#minimum-cost-flow">Minimum cost flow</a><ul>
<li><a href="#optimal-matching-of-ships-and-target-cells">Optimal matching of ships and target cells</a></li>
<li><a href="#optimal-matching-of-ships-and-next-cells">Optimal matching of ships and next cells</a></li></ul></li>
<li><a href="#prevent-our-bases-from-destroyed">Prevent our bases from destroyed</a></li>
<li><a href="#set-up-bases-to-increase-the-area-of-our-position">Set up bases to increase the area of our position</a></li></ul></li>
<li><a href="#afterword">Afterword</a></li>
<li><a href="#code">Code</a></li>
<li><a href="#Japanese-version">Japanese version</a></li>
</ul>
<h2>Approaches</h2>
<p>There are two types of algorithmic approaches: using/not using reinforcement learning. We knew that reinforcement learning is not suitable for some games, so early in the competition, we spent a lot of time "understanding the nature of Halite games" and "systematizing the cases where reinforcement learning works" to determine if reinforcement learning makes sense as a problem solver.</p>
<p>We concluded that it was going to be a big swamp to work on reinforcement learning, and we chose not to use reinforcement learning. The main reasons were "the size of the player's action space" and "the difficulty of seeking <em>global</em> optimization". For example, with 30 ships, there are 6^30 action patterns per turn; even in Go, where AlphaGo is active, there are only a few hundred options per turn at most.</p>
<p>To solve this large action space, it seems reasonable to take a ship-by-ship reinforcement learning approach. However, while we may be able to seek a per-ship optimum, it is difficult to seek a holistically optimal action that takes into account all ships, leading us to believe that considerable effort would be required to go beyond a non-reinforcement learning approach.</p>
<p>On the other hand, in the last Halite competition (with slightly different rules than this one), some teams had done well in reinforcement learning, so we decided not to abandon reinforcement learning altogether, and our team member Johannes decided to continue to stick with it from the middle of the competition onwards.</p>
<h2>Our Strategy</h2>
<p>There are six key points in our strategy.</p>
<ol>
<li>Aggressively mine halite until step 100. If we can make ships, make them.</li>
<li>From steps 101 to 300, attack opponents and do not collect too much halite. If we can make ships, make them.</li>
<li>From step 301 onwards, collect halite aggressively. Do not make ships.</li>
<li>Solve for two stages of matching, "optimal matching between ship and target cell" and "optimal matching between ship and next turn cell", with minimum cost flow.</li>
<li>Keep a ship at each base and do not let the bases be destroyed.</li>
<li>Set up bases to increase the area of the position.</li>
</ol>
<h3>1-100 steps: Collect halite and make ships aggressively</h3>
<p>Like many of the top teams, we generated ships as soon as possible. The earlier you generate ships, the better, and as we will see below, the more ships you have, the bigger your advantage when attacking opponents. Therefore, storing halite at the beginning of the game is a bad idea, and it is important to create more ships early on while collecting halite efficiently.</p>
<h3>101-300 steps: Attack opponents, do not collect too many halite, make ships</h3>
<p>Around the 100th step, the halite is nearly exhausted, the so-called "tragedy of the commons". We will also have about 20 ships (if the players' abilities are balanced).</p>
<p>From here, a strategy of destroying our opponents' ships is advantageous, as there is little to be gained by mining halite. Generating a ship needs 500 halite, so destroying a ship deprives its owner of about 500 halite of value. We attacked an opponent's ship with a maximum of four ships, as attacking with multiple ships is much more likely to force the enemy ship out of position than attacking with a single ship.</p>
<p>It is also advantageous to avoid mining halite as much as possible, because ships carrying halite become targets of attack. Halite collected in cells far from a base has a higher chance of being lost, so we avoided collecting halite in cells beyond a certain distance from our bases.</p>
<p>Avoiding mining has another purpose: letting the halite regenerate. Mining in moderation prevents halite depletion.</p>
<h3>301-400 steps: Mining halite aggressively, not making ships</h3>
<p>By step 300, the number of opponents' ships is decreasing. Also, by attacking and not mining too much around our bases, the halite there has accumulated. We mined this halite all at once.</p>
<p>Ship creation is stopped: even if we create a new ship, it is difficult for it to recoup its 500-halite cost.</p>
<h3>Minimum cost flow</h3>
<p>We resolved matching problems by using minimum cost flow, a familiar optimization problem in competitive programming.<br>
We used this algorithm in two stages, as follows.</p>
<h4>Optimal matching of ships and target cells</h4>
<p>Mining halite should be planned several turns before arriving at the target cell. Since each ship has a different distance to each cell and a different amount of halite in its cargo, the optimal matching of ships and target cells is important. With the right matching, we can avoid all of the following bad moves at once.</p>
<ul>
<li>Sending multiple ships to the same cell.</li>
<li>Sending a ship to a cell that another ship could reach earlier.</li>
<li>Colliding with our own ships.</li>
</ul>
<p>The latter two are particularly hard to handle with a method that picks each ship's target cell in a per-ship loop.</p>
<p>For each ship, every cell is evaluated as follows:</p>
<ol>
<li><p>In the event of a collision our ship would be destroyed, so cells containing such an enemy ship and the four cells around them are forbidden. With these cells masked, use breadth-first search to find the shortest path to every cell.</p></li>
<li><p>Using the shortest paths, find for each cell "how many turns the ship should stay there and then return to the nearest base in order to maximize the average amount of halite acquired per turn". The return trip to the base is taken into account (a sketch of this evaluation follows the list).</p>
<ul>
<li>Same technique as in the notebook <a href="https://www.kaggle.com/solverworld/optimus-mine-agent" target="_blank">Optimus Mine Agent</a>.</li>
<li>However, there is room for improvement because this logic does not take into account mining halite from more than one cell.</li>
<li>From 101 to 300 steps, if there are enemy ships in the cell, the halite held by the enemy ships is added to the evaluation.</li></ul></li>
<li><p>Take that average amount of halite gained as the evaluation value for each cell.</p></li>
</ol>
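<p>A simplified sketch of the per-cell evaluation from step 2, in the spirit of the Optimus Mine Agent notebook; it uses the 25% collection rate per mining turn and ignores halite regeneration while traveling, and the function name and <code>max_stay</code> bound are illustrative assumptions.</p>
<pre><code>def cell_score(halite, dist_to_cell, dist_to_base, max_stay=10):
    # a ship collects 25% of the remaining halite per mining turn,
    # so after m turns it has gained halite * (1 - 0.75 ** m)
    best = 0.0
    for m in range(1, max_stay + 1):
        mined = halite * (1 - 0.75 ** m)
        turns = dist_to_cell + m + dist_to_base
        best = max(best, mined / turns)
    return best
</code></pre>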
<p>The minimum cost flow based on this evaluation gives the overall optimal ship-target cell combination.</p>
<h4>Optimal matching of ships and next cells</h4>
<p>After finding the target cell, we want to choose the next cell each ship moves to. We considered the ship's current cell and the 4 cells around it, computed the shortest distance from each to the target cell, and gave smaller distances better evaluation values. However, any cell reachable in a single turn by an enemy ship that would destroy ours on collision receives an extremely poor evaluation.</p>
<p>The minimum cost flow based on this evaluation gives the overall optimal ship/next-turn-cell combination. Because of how the flow edges are constructed, our ships never collide with each other.</p>
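<p>The team's implementation uses minimum cost flow; as a simplified stand-in, when each ship must be matched to a distinct cell the problem reduces to an assignment problem, sketched below with scipy's Hungarian solver. The function and argument names are hypothetical.</p>
<pre><code>import numpy as np
from scipy.optimize import linear_sum_assignment

def match(ship_ids, cells, score):
    # score(ship, cell) returns the evaluation value (higher is better);
    # negate it so the assignment solver minimizes total cost
    cost = np.array([[-score(s, c) for c in cells] for s in ship_ids])
    rows, cols = linear_sum_assignment(cost)   # handles more cells than ships
    return {ship_ids[i]: cells[j] for i, j in zip(rows, cols)}
</code></pre>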
<h3>Prevent our bases from destroyed</h3>
<p>Some opponents attacked our bases. If a base is destroyed, it will cause a lot of damage for two reasons.</p>
<ol>
<li>To create a base, you need to generate a ship from our base and convert it from a ship to a base, which requires 1000 halite in total.</li>
<li>Much of our logic is built on the assumption that we have bases.</li>
</ol>
<p>We solved this problem by keeping a ship on top of each base at all times.</p>
<h3>Set up bases to increase the area of our position</h3>
<p>In the middle of the game, we must prevent our regenerating halite from being mined by enemy ships. Even if an enemy ship enters our territory and collects our halite, it is easier there to surround and destroy it.</p>
<h2>Afterword</h2>
<p>We also implemented a few other minor algorithms, but we won't go into those. Early in the contest, we introduced an algorithm that uses a two-step minimum cost flow. We think this algorithm was easy to combine with other logic and worked well until the end. We regret that we did not apply reinforcement learning well!</p>
<h2>Code</h2>
<p><a href="https://gist.github.com/minaminao/cbe4a3d514ea9ae924cb5d59181d78cc" target="_blank">Here</a> is our final code. There are some Japanese comments;)</p>
<h2>Japanese version</h2>
<p><a href="https://minaminao.com/halite-by-two-sigma/" target="_blank">Here</a> is our solution in Japanese!</p> |
Halite by Two Sigma | Post-mortem for HuiUnited's Halite IV bot (26th rank) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Halite by Two Sigma
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The code for my highest score bot is <a href="https://github.com/arpakornb/Halite4" target="_blank">here</a>.</p>
<p>First of all, I would like to thank Two Sigma and Kaggle for organizing such a fun competition. I'm also indebted to the notebook from <a href="https://www.kaggle.com/yegorbiryukov/halite-swarm-intelligence" target="_blank">Yegor Biryukov</a> and <a href="https://www.kaggle.com/alexisbcook/getting-started-with-halite" target="_blank">Alexis Cook</a> for getting me started.</p>
<p>Initially I did this only as a way to learn python and start participating in kaggle competition. My background was in C programming for networking equipment with no ML experience.</p>
<p>I took a direct (and you might say, naive) approach: dig, steer, deposit, with no attempt at hunting. Each ship scans its neighborhood up to four steps away to find the highest-halite cell (weighted to prefer nearer cells) and goes there. Neighboring ships don't go to cells that have already been chosen. This seemed good enough that I stuck with it until the end. I augmented it (in the last week of the competition) to improve the opening phase (up to ~step 20) by choosing the top 9 halite cells in my quadrant (excluding the two steps around the yard) as targets for the first 9 ships. I control how much each ship digs with a maximum constant that varies based on the game step and how aggressively other players are hunting.</p>
<p>Navigating crowded space seems very challenging. I try to solve this by assessing risk within two steps of each ship at the beginning of each turn. I assign varying risk values depending on whether a nearby ship is mine or an enemy's, its distance, and the halite it carries. I let my ships that might be attacked choose their moves first, followed by ships with lower risk values (trying to make room for ships in risky positions).</p>
<p>With all of the above, my submissions' score just would not go higher than 1100. They would do well up until around step 70-200, when not enough ships survived the hunting onslaught. That's when I realized that I needed to hunt as well, so I experimented with the easiest approach: I stop collecting at around step 70, and every ship with zero halite looks for the nearby enemy ship (within three steps) with the highest halite and steps toward it. There is no coordination between ships whatsoever. This seemed to raise my score to the ~1200 level. I think it is not really the hunting that helps; mostly I preserve enough ships to have a chance to collect again after step 300. I augmented this later by doing some digging if there's no enemy ship around, and that helps a little bit.</p>
<p>I also improved shipyard creation and placement and saw a small improvement. More precision is needed there.</p>
<p>Other noteworthy improvements came from reacting to what other players do. rica_chan's bot taught me a lot. :) For example, its shipyard destruction near the end prompted me to tighten the ship and shipyard creation logic. I also copied mzotkiew's end game all-out deposit.</p>
<p>All in all, this has been good fun and a great experience. Looking forward to the next competition. I'm thankful for everyone who has shared their code. It is great learning resource. Really appreciate it!</p>
<p>Thank you very much. Congrats on your gold medal!</p> |
Halite by Two Sigma | Raine Force writeup (2nd place solution) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Halite by Two Sigma
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you !!</p>
<p>Wow thank you so much for giving me some intelligent analyses, and I repeatedly say congratulations!</p>
<p>Yes, I also observed during the last week that my agents performed poorly in the early game and the late game (including the spawn strategy) but performed strongly in the mid game, the hunting season.<br>
I implemented some logic in RestrainShipyardProject to attack the 2nd/3rd-place players, but my best agent rejected the project; the target shipyard to attack in the last few steps is therefore decided by distance only.<br>
I limited hunting ships with at least two empty defenders because of the trade risk, but the hunting may still be too aggressive.</p>
<p>Thank you!</p>
<p>UPDATE: I submitted a notebook that I created the tables: <a href="https://www.kaggle.com/viewlagoon/halite-iv-count-win-or-lose-for-each-team" target="_blank">https://www.kaggle.com/viewlagoon/halite-iv-count-win-or-lose-for-each-team</a></p>
<p>Thank you and congratulations! To evade / defend your aggressive shipyard attacking is my final task in the last two days.</p>
<p>Also congratulations on your unique blocking plantation strategy! You are the true winner against me in 1v1.</p>
<p>Thank you so much!</p>
<p>Thank you! In fact I submitted two changes: (1) keeping distance from opponent shipyards and (2) defending shipyards with the default strategy instead of chasing. They worked better than I expected.</p>
<p>Perhaps the fact that I avoided imitation learning, unlike you, encouraged me to climb up :p</p>
<p>Thank you :)</p>
<p>NOTE: Although my team name "Raine Force" sounds like reinforcement learning, my agents are 100% rule based.</p>
<p>Hi! I really enjoyed the Halite competition. Thanks to the Kaggle and Two Sigma staff; helpers.py in kaggle-environments was really helpful for me. Thanks also to my fellow competitors: it was interesting to watch the variety of strategies in the replays.<br>
Congratulations Tom! I think one reason I reached such a high place is that my conversion strategy luckily had some immunity against Tom's smart harass strategy. In fact, the loss count of my best 5 agents is nearly equal to or greater than their win count against the best 5 agents of convexOptimization, Uninstall LoL, mzotkiew and KhaVo Dan Gilles Robga Tung.</p>
<p><img src="https://github.com/viewlagoon/HaliteRaineForce/blob/main/images/table_Raine_Force.PNG?raw=true" alt="Raine Force vs top teams"></p>
<p>However, the win rate against Tom was different from that against the other top competitors (my best 5 agents got roughly a 40% win rate against Tom's best 5 agents).</p>
<p><img src="https://github.com/viewlagoon/HaliteRaineForce/blob/main/images/table_Tom_Van_de_Wiele.PNG?raw=true" alt="Tom Van de Wiele vs top teams"></p>
<p>A notebook that I created the tables is <a href="https://www.kaggle.com/viewlagoon/halite-iv-count-win-or-lose-for-each-team" target="_blank">here</a></p>
<hr>
<p>Since other teams have already submitted good discussions, I mainly focus on Raine Force specific implementation in this discussion.</p>
<ul>
<li>Official rules are <a href="https://www.kaggle.com/c/halite/overview/halite-rules" target="_blank">here</a></li>
<li>Collections of Post-mortems & Bot Source Codes are <a href="https://www.kaggle.com/c/halite/discussion/186290" target="_blank">here</a> or <a href="https://www.kaggle.com/c/halite/discussion/184028" target="_blank">here</a></li>
</ul>
<hr>
<h2>Stateful, Long term Decision Making</h2>
<p>I call my high-level decision making AI "Projects".</p>
<ul>
<li>All projects first compute their priorities, are sorted by them, and then run sequentially.</li>
<li>If a project is not suitable, it returns a priority below 0.</li>
<li>The projects can hold ships and halite (as budget) for multiple steps to make consistent, long-term decisions.<ul>
<li>However, some projects are still allowed to rob ships from other projects, to handle urgent situations flexibly.<ul>
<li>For example, HuntProject may rob ships from other projects.</li>
<li>Swapping ship roles is valuable in some situations for reaching target positions faster. However, it's difficult to implement swapping without bugs, so I implemented it only in very limited situations.</li></ul></li></ul></li>
</ul>
<h3>Actual projects (roughly sorted by priority)</h3>
<p><img src="https://github.com/viewlagoon/HaliteRaineForce/blob/main/images/HuntProject_example.gif?raw=true" alt="HuntProject example"></p>
<ul>
<li>HuntProject (1 project per opponent ship)<ul>
<li>Do ally ships surround the opponent ship? (= Is the opponent ship boxed in by my ship candidates?)</li>
<li>It assigns as many ships as possible, 5 to 8 (1 or 2 for each direction), to hunt stably.</li>
<li>It does nothing (= no ships join it) when the surround is incomplete (e.g. only 3 directions / only 4 ships) or when coordination with another opponent would be required.</li>
<li>If one of the ally ships wants to stay but is standing on ground halite and the opponent is not in checkmate, it may step back. Additionally, when the opponent ship has only one safe position, the hunters may use one-step-ahead prediction to move in a valid direction.</li></ul></li>
</ul>
<hr>
<p><img src="https://github.com/viewlagoon/HaliteRaineForce/blob/main/images/DefenseShipyardProject_example.gif?raw=true" alt="DefenseShipyardProject example"></p>
<ul>
<li>DefenseShipyardProject (1 project per ally shipyard)<ul>
<li>It assigns a ship which can reach the shipyard faster than any opponent.</li>
<li>The ship mines ground halite if it has steps to spare. This conflicts with MineProject; since DefenseShipyardProject usually has much higher priority than MineProject, the MineProject is ignored and DefenseShipyardProject uses the MineProject's halite threshold.</li>
<li>The target shipyard spawns if needed.</li>
<li>It detects campers from the current episode history and punishes them.</li></ul></li>
</ul>
<hr>
<p><img src="https://github.com/viewlagoon/HaliteRaineForce/blob/main/images/ExpeditionProject_example.gif?raw=true" alt="ExpeditionProject example"></p>
<ul>
<li>ExpeditionProject (1 project)<ul>
<li>It assigns a ship.</li>
<li>It decides whether or not to convert another shipyard.</li>
<li>It searches for a suitable position to convert.</li>
<li>It reserves a 1000-halite budget so it can spawn safely after converting.</li>
<li>See the Aggressive Conversion Strategy section for more details.</li></ul></li>
</ul>
<hr>
<p><img src="https://github.com/viewlagoon/HaliteRaineForce/blob/main/images/EscortProject_and_MineProject_example.gif?raw=true" alt="EscortProject and MineProject example"></p>
<ul>
<li>EscortProject (1 project per ally ship)<ul>
<li>It assigns an empty ship as a defender.</li>
<li>Usually the target ship either carries halite or belongs to a MineProject whose position is far from ally shipyards.</li>
<li>The escorted ship can also join this project; this means the ship just wants to return to a shipyard, and joining prevents other projects from robbing it.</li>
<li>In the late game, the empty ship's target position is switched to the nearest opponent shipyard to attack.</li></ul></li>
<li>MineProject (1 project per ground cell with halite)<ul>
<li>It assigns a miner ship.</li>
<li>It computes the halite threshold (= how much halite should be mined). It depends on the current step and the distance to ally/opponent shipyards, or on the hunt zone (see the Shipyard Location section below).</li>
<li>Risk management<ul>
<li>If the miner mines here right now, can opponents surround and hunt the miner?</li>
<li>Ignore cells adjacent to opponent shipyards.</li></ul></li>
<li>Note that this is far from optimal mining because coordination across multiple cells is very limited; the projects merely avoid assigning neighboring cells simultaneously unless the ships are very near.</li>
<li>In the late game, discard a project whose cell cannot be mined and deposited to my shipyard in time.</li></ul></li>
</ul>
<hr>
<p><img src="https://github.com/viewlagoon/HaliteRaineForce/blob/main/images/RestrainShipyardProject_example.gif?raw=true" alt="RestrainShipyardProject example"></p>
<ul>
<li>RestrainShipyardProject (1 project per opponent shipyard)<ul>
<li>1 or 2 ships harass the shipyard.</li>
<li>Since camping gives the 2 other opponents an advantage, some submissions, including my best one, don't include this project.</li></ul></li>
</ul>
<hr>
<ul>
<li>Default Strategy<ul>
<li>If a ship is not assigned to any project, the default strategy is simply to return to a shipyard and move around it, defending the shipyards and the plantations that grow halite through the regeneration rule (2% per step).</li></ul></li>
</ul>
<hr>
<h3>Debugging</h3>
<ul>
<li>Debugging stateful agents is difficult.</li>
<li>I kept decisions stateless where possible. Otherwise, I reduced the number of states to a minimum. The final states are few:<ul>
<li>Ship id in some projects</li>
<li>Halite budgets in ExpeditionProject and DefenseShipyardProject</li>
<li>Ships' defending directions in HuntProject</li></ul></li>
<li>I kept my code able to reproduce replays<ul>
<li>When I found a suspicious move at step t, I fed the replay from step 0 to (t - 1) to reproduce it, then modified my code, re-ran it up to (t - 1), and checked the new Q values again.</li>
<li>Situations that use pure random values are very limited. Almost all of them are game-of-chicken situations, that is, two players' ships located on cells near a fertile halite ground.</li></ul></li>
</ul>
<hr>
<h2>Aggressive Conversion Strategy</h2>
<h3>Motivation</h3>
<p>Although Tom characterized the Raine Force agent by "its highly aggressive base conversion logic", my motivation for converting as many shipyards as possible is quite conservative.<br>
I respected mzotkiew's hunting power, and at an early stage of my agent coding (late August) I implemented HuntProject. That project was strong enough to hunt my own miners even with an EscortProject defender, and I realized that it is often impossible, or quite risky, to carry halite over long distances. That's why my agents build shipyards in many places: I just want to deposit safely against offensive top competitors.<br>
Urgent conversion is also conservative. When a ship detects that it can survive by staying, but the Q values of all 4 move directions are below the conversion threshold, it tries to convert. See also the Decision Ordering section.<br>
The only thing that is truly aggressive is the second conversion. ExpeditionProject reserves a 1000-halite budget and converts an empty ship as fast as possible. I then start with 7 ships and 2 shipyards while many other players start with 9 ships and 1 shipyard. I was not sure whether this would work, but the replay logs indicate that it works well for dominating a fertile center/edge/corner position.<br>
Another thing that makes my agent look too aggressive is not conversion but late-game spawning (it looks like Robiland). My spawning strategy evaluates the number of ships, the remaining steps and the world ground halite. In fact, the origin of the late-game spawning is not my intent (i.e. a bug). Top players grow their halite well and are careful about mining when powerful opponent attackers are nearby, so the late-game world halite increases. My agents are fooled into thinking it's time to spawn. However, after my agents build many ships and shipyards, my ships attack and mine their plantations and sometimes succeed in robbing their fertile grounds. This sometimes leads to dominating the game completely, which may be why other competitors got the impression that Raine Force is too aggressive.</p>
<p><img src="https://github.com/viewlagoon/HaliteRaineForce/blob/main/images/Aggressive_spawning_successed.PNG?raw=true" alt="Aggressive spawning could be successed"><br>
<img src="https://github.com/viewlagoon/HaliteRaineForce/blob/main/images/Aggressive_spawning_failed.PNG?raw=true" alt="Aggressive spawning could be failed"></p>
<hr>
<h3>Shipyard Location</h3>
<p>Many players imitated Fei Wang's triangle to grow internal ground halite and hunt mining opponents. I did so too in some submissions. However, in the other submissions, including the best one, I preferred simply to place two shipyards diagonally. That is enough to grow halite and hunt inside the internal rectangle/square, because spawning from two diagonal shipyards can easily supply HuntProject candidates for all directions. I call the internal rectangle a hunt zone for me and a danger zone for opponents.</p>
<p><img src="https://github.com/viewlagoon/HaliteRaineForce/blob/main/images/hunt_zone_example.gif?raw=true" alt="Hunt in hunt zone"></p>
<p>ExpeditionProject calculates a shipyard-candidate score for every cell (more precisely, by BFS from the ship candidates of ExpeditionProject). The basic scoring is based on:</p>
<ul>
<li>surrounding halite</li>
<li>increasing hunt zone</li>
<li>decreasing danger zone</li>
<li>distance between the other shipyards</li>
<li>distance between the location and ship candidates<ul>
<li>can the ship safely reach it? (when ExpeditionProject assigns a non-empty ship)</li>
<li>decaying score for far locations</li></ul></li>
</ul>
<p>and executing threshold in ExpeditionProject is based on:</p>
<ul>
<li>number of remaining steps</li>
<li>world ground halite / number of all player ships</li>
<li>number of ships<ul>
<li>Parameters are different among submissions, but I describe the parameters of my best agent here.</li>
<li>Basically the threshold is 1 shipyard per 7 ships.<ul>
<li>For example, if the number of ships is greater than 14, ExpeditionProject tries to convert a 3rd shipyard.</li></ul></li>
<li>If the world ground halite is plentiful, the threshold is 1 shipyard per 6 ships.</li>
<li>If my ship count is the highest, the threshold is 1 shipyard per 4 ships.</li></ul></li>
</ul>
<p>However, after I submitted on 9/13 and checked the replay logs, I first noticed that most top players loved to attack an opponent shipyard placed near their own shipyard (while Leukocyte loved to attack shipyards anywhere), despite this giving an advantage to the two other opponents. So in the last two days I removed the preference for decreasing the danger zone and instead kept distance from opponent shipyards. Since urgent conversion remains outside of ExpeditionProject, my ships sometimes still convert inside an opponent triangle and start a war anyway.</p>
<hr>
<h2>Escape Strategy</h2>
<p>Two simple principles are enough to understand my escape strategy when there is no safe position:</p>
<p><img src="https://github.com/viewlagoon/HaliteRaineForce/blob/main/images/escape_strategy_example_1.gif?raw=true" alt="Escape Strategy"></p>
<ul>
<li>Go to positions which empty ships of two different players can reach.<ul>
<li>This gives opponents the risk of a trade, which may make them cancel the attack.</li>
<li>One of the ships may be an ally; typically it's an EscortProject defender.</li></ul></li>
</ul>
<hr>
<p><img src="https://github.com/viewlagoon/HaliteRaineForce/blob/main/images/escape_strategy_example_2.gif?raw=true" alt="Escape Strategy"></p>
<ul>
<li>Go to positions with an empty opponent and ground halite<ul>
<li>Typical attackers avoid mining except in checkmate situations.</li></ul></li>
</ul>
<hr>
<h2>Decision Ordering</h2>
<ol>
<li>All ships calculate their Q-values by projects or a default strategy.</li>
<li>All ships calculate their priority by their cargo, number of surrounding ships and a parent ship (described below).</li>
<li>Sort ships by the priorities.</li>
<li>Resolve the moves sequentially (a sketch follows the Q-value details below).</li>
</ol>
<ul>
<li>I prepared two types of Q-values.<ul>
<li>shape == (5,). default. mine / north / east / south / west<ul>
<li>choose argmax direction.</li>
<li>positions that other ships have already decided to move to are masked.<ul>
<li>I keep a map-sized array of flags recording the next positions of ships whose moves are already decided.</li>
<li>if the optional argument "forced" is True, the ship can still move to that position (used in the last several steps).</li></ul></li></ul></li>
<li>shape == (5, 5).<ul>
<li>Ships can set a parent ship.</li>
<li>The Q values used depend on the parent ship's moving direction.<ul>
<li>For example, q[1, 4] is the Q value for the case where the parent ship moves north and the target ship moves west.</li></ul></li>
<li>In the priority calculation phase, if parent.priority < child.priority, parent's priority is increased. It can propagate to parent's parent, parent's parent's parent, …</li>
<li>It is basically used by EscortProject. For example, a defender can set a miner as a parent ship.<ul>
<li>When the projects decide ships' Q values, the miner's actual move is not yet known, so I prepared the (5, 5) shape for the defender.</li></ul></li></ul></li>
<li>The Q values are also compared to the convert threshold, which depends on the ship's cargo and the values described in the ExpeditionProject section.<ul>
<li>If all 4 moving Q values are below the convert threshold or masked, the ship tries to convert.</li>
<li>When conversion is worthless, the convert threshold is -infinity, which means the ship can select a low-Q-value move such as moving onto an empty opponent.</li></ul></li>
</ul>
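<p>A minimal sketch of the priority-ordered sequential resolution described above, covering only the (5,) Q values and the position masking; the conversion threshold, parent-ship Q values and data structures are simplified, and the names are hypothetical.</p>
<pre><code>def resolve_moves(ships_by_priority, q_values):
    # q_values[ship]: dict direction -> (next_position, q value)
    claimed = set()     # positions already taken this step (the flag map)
    actions = {}
    for ship in ships_by_priority:          # highest priority decides first
        best_dir, best_q = None, float("-inf")
        for direction, (pos, q) in q_values[ship].items():
            if pos in claimed:              # masked: an ally already moves here
                continue
            if q > best_q:
                best_dir, best_q = direction, q
        actions[ship] = best_dir
        if best_dir is not None:
            claimed.add(q_values[ship][best_dir][0])
    return actions
</code></pre>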
<hr>
<h2>Local Evaluation</h2>
<p>Someone might wonder about local evaluation, because I only submitted 17 times and 13 of those submissions were made during the last 3 days (9/13, 14, 15).<br>
Sadly, I have no impressive method. I implemented a simple system that reorders the seats over all 24 patterns, counts win=+1 / lose=-1 for the 6 pairs, and keeps playing. That's all. I always picked the 4 agents by hand and didn't spend time building a powerful matchmaking system; that is definitely my weakness.<br>
I prepared HuntProject and RestrainShipyardProject, and that was eventually roughly enough to simulate uncomfortable opponent attacks, except shipyard attacking… I implemented shipyard attacking in RestrainShipyardProject on 9/14.<br>
My submitted agents still have weaknesses. In particular, I had a lot of TODO work left in end-game specializations such as leaving the shipyard and not converting without a shipyard (escape only).</p>
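<p>For illustration, a rough sketch of that kind of local league, assuming a hypothetical <code>run_game(seated_agents)</code> that plays one match and returns the seat indices in finish order; it only shows the seat-permutation and pairwise win/lose counting idea.</p>
<pre><code>from collections import defaultdict
from itertools import combinations, permutations

def local_league(agents, run_game):
    """agents: list of 4 agent files/callables.
    run_game(seated_agents) -> seat indices (0-3) ordered from 1st to 4th place."""
    score = defaultdict(int)
    for seating in permutations(range(4)):                      # 24 seat orders
        finish_order = run_game([agents[i] for i in seating])
        agents_in_order = [seating[seat] for seat in finish_order]
        for better, worse in combinations(agents_in_order, 2):  # 6 pairs per game
            score[better] += 1                                  # win = +1
            score[worse] -= 1                                   # lose = -1
    return dict(score)
</code></pre>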
<hr>
<h2>Source Codes</h2>
<p>My best agent is on <a href="https://github.com/viewlagoon/HaliteRaineForce" target="_blank">GitHub</a></p>
<hr>
<h2>Conclusion</h2>
<p>I developed 100% rule based agents which have:</p>
<ul>
<li>Projects to assign roles and make long-term decisions</li>
<li>An aggressive conversion strategy to deposit halite safely and to create a hunt zone</li>
<li>A decision-ordering system that uses Q values and priorities to avoid collisions</li>
</ul>
<p>Thank you for reading!</p> |
Halite by Two Sigma | Team 677's Post-Mortem Writeup (29th Place) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Halite by Two Sigma
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hey everyone! Our team has finally finished our post-mortem writeup. You can check it out on Github <a href="https://github.com/josgoo/Halite" target="_blank">here</a>. We spent some time discussing the bot implementation, but we also devoted a section to the things we learned about approaching these kinds of coding competitions from the start---this was our team's first ever competition, so we hope our takeaways can help advise others getting started for the first time!</p>
<p>Thanks again to Two Sigma and Kaggle for hosting such an awesome competition, and to our fellow competitors for their interesting writeups. We can't wait for the next one 😃</p> |
Happywhale - Whale and Dolphin Identification | #36 place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you, Balaji; I hope we can team up in the future.</p>
<p>Here is a training notebook that includes the 3 implementations; you can check the differences with yours.<br>
<a href="https://www.kaggle.com/code/liuzhangzhen/happywhale-bodydataset-training" target="_blank">https://www.kaggle.com/code/liuzhangzhen/happywhale-bodydataset-training</a></p>
<p>Congratulations to all the winners. Thanks Happywhale and Kaggle for hosting this competition.</p>
<p>Many thanks to Lex Toumbourou and Jan bre for their sharing; I started from their significant work [Happywhale - Effnet B6 fork with Detic crop] and the backfintfrecords dataset.<br>
I tried many experiments during the competition, and in the end only three changes gave significant model improvements.</p>
<p>1. <strong>Dynamic margins</strong>: public LB +0.02 (thanks to the Landmark team for sharing the solution; the paper is here: <a href="https://arxiv.org/abs/2010.05350" target="_blank">https://arxiv.org/abs/2010.05350</a>)<br>
I tested and found the best margin range to be 0.05-0.6, but training became more difficult (the loss easily goes to NaN) and the LR needs to be reduced.<br>
2. <strong>Batch normalization</strong>: public LB +0.02 (thanks Heng; the discussion is here: <a href="https://www.kaggle.com/competitions/happy-whale-and-dolphin/discussion/315129" target="_blank">https://www.kaggle.com/competitions/happy-whale-and-dolphin/discussion/315129</a>)<br>
Add a batch-normalization layer before the dense layer, plus L2 normalization for the dense output (a sketch follows below).<br>
3. <strong>FreezeBN</strong>: public LB +0.01 (thanks Balaji; the discussion is here: <a href="https://www.kaggle.com/competitions/happy-whale-and-dolphin/discussion/309582" target="_blank">https://www.kaggle.com/competitions/happy-whale-and-dolphin/discussion/309582</a>)<br>
It works for EfficientNet-B6 and B5, but not for B7 (the loss always goes to NaN).</p>
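<p>A hedged Keras sketch of one possible reading of item 2; the embedding size and exact layer arrangement are my assumptions, not taken from the linked notebook.</p>
<pre><code>import tensorflow as tf

def embedding_head(backbone_output, embed_dim=512):
    """BatchNorm before the dense layer, then an L2-normalized dense
    output that is fed to the ArcFace margin head."""
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone_output)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Dense(embed_dim, use_bias=False)(x)
    x = tf.keras.layers.Lambda(lambda t: tf.math.l2_normalize(t, axis=1))(x)
    return x
</code></pre>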
<p>Here is a training notebook that includes the above 3 implementations:<br>
<a href="https://www.kaggle.com/code/liuzhangzhen/happywhale-bodydataset-training" target="_blank">https://www.kaggle.com/code/liuzhangzhen/happywhale-bodydataset-training</a></p>
<p>Finally, two ensembling steps helped improve the public LB to 0.858:<br>
Concatenating 7 model embeddings improved the LB from 0.815 to 0.835 (body dataset).<br>
Combining the test neighbors of the two datasets (body and fin) and taking the max confidence improved the LB from 0.835 to 0.858.</p>
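<p>A small sketch of that max-confidence merge, including the LB-score weighting described further below; the prediction container format (<code>{image: [(individual_id, confidence), ...]}</code>) is an assumption made for illustration.</p>
<pre><code>def merge_by_max_conf(preds_body, preds_fin, lb_body, lb_fin, top_k=5):
    """Merge two prediction sets, weighting each confidence by (public LB score)**2
    and keeping the highest weighted confidence per candidate id."""
    merged = {}
    for image, body_preds in preds_body.items():
        scores = {}
        for preds, lb in ((body_preds, lb_body), (preds_fin.get(image, []), lb_fin)):
            for individual_id, conf in preds:
                weighted = conf * lb ** 2
                scores[individual_id] = max(scores.get(individual_id, 0.0), weighted)
        merged[image] = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return merged
</code></pre>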
<p>Here is my final solution:<br>
· Dataset: fin and body; used yolov5x6 to train 10 folds, tested with b0rev256, then picked the best 5 folds, choosing the best one for training and all the others for TTA. (Result: huge effort but only a slight improvement; I probably should have checked the dataset more carefully and removed noise.)</p>
<p>· Split: randomly chose 500 2 pictures and 55 pictures as new whales (about 11%, just the same percentage as the LB) for the validation dataset and used all the others for training. (Result: the validation dataset proved to be correlated with the LB and was useful.)</p>
<p>· Pseudo labels: took 4000+ images whose top confidence was > 0.95 and whose confidence for all other targets was < 0.65.</p>
<p>· Augmentation: base notebook + 50% rotation, shear, shift, zoom. (Result: slight improvement.)</p>
<p>· Model: dynamic margins, a batch-normalization layer before the dense layer, L2 normalization for the dense layer, FreezeBN. (Result: significant improvements.)<br>
· Model candidates (final 7-8 models for fin and body):<br>
· EffV1: b7 768 Adam (on TPUv3-8), b7 608 Adam (Colab), b6 640 Adam (Colab), b5 640 Adam (Colab), b6 480 Adam (Colab), plus some SGD models (e.g., b7 640)</p>
<p>· Ensemble<br>
· TTA: 5-fold embeddings, taking the mean (small improvement)<br>
· Embedding concatenation for the body and fin datasets (large improvement)<br>
· Merge fin and body by max confidence, weighted by the LB score (using confidence * (LB score**2)) (large improvement)</p> |
Happywhale - Whale and Dolphin Identification | 10th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all the winners, and thank you to the Kaggle hosts and all participants for this exciting competition.<br>
This is my first solo gold medal.<br>
I'm thrilled to have won a gold medal among so many Kaggle GMs and Masters.</p>
<p>Here is my 10th place solution.</p>
<h1>Dataset</h1>
<p>I used 2 types of datasets, and I especially thank <a href="https://www.kaggle.com/jpbremer" target="_blank">@jpbremer</a> for sharing valuable approaches and datasets.</p>
<ul>
<li>fullbody dataset <ul>
<li>public dataset by <a href="https://www.kaggle.com/jpbremer" target="_blank">@jpbremer</a></li>
<li>private dataset <ul>
<li>trained yolov5 and WBF with bboxes by Detic </li></ul></li></ul></li>
<li>backfin dataset <ul>
<li>public dataset by <a href="https://www.kaggle.com/jpbremer" target="_blank">@jpbremer</a> </li></ul></li>
</ul>
<h1>Model</h1>
<p>I trained 2 types of models (fullbody/backfin) with image sizes 512 or 784.<br>
fullbody/backfin model: trained with the fullbody/backfin dataset.</p>
<p>When we search nearest neighbors, </p>
<ul>
<li><p>for the species with a backfin -> use concatenated embeddings from the fullbody models and the backfin models</p></li>
<li><p>for the species without a backfin (like Beluga, …) -> use embeddings from the fullbody models</p>
<ul>
<li>backbone </li>
<li>efficientnetv2_m</li>
<li>efficientnetv2_l</li>
<li>convnext_base</li>
<li>convnext_large</li></ul></li>
</ul>
<p>In my experiments, EfficientnetV2 > EfficientnetV1 ≧ ConvNext, but ensembling them boosted my CV/LB scores. <br>
I also concatenated the outputs of the conv layers before the pooling layer and then forwarded this to the neck of the model. <br>
This also worked well. </p>
<h1>Augmentation</h1>
<p>The main augmentation is below. </p>
<ul>
<li>HorizontalFlip</li>
<li>ImageCompression</li>
<li>ShiftScaleRotate</li>
<li>RandomBrightnessContrast</li>
<li>HueSaturationValue</li>
<li>MotionBlur </li>
</ul>
<h3>Mixup</h3>
<ul>
<li>Mixing up the embeddings (not the images) and using ArcFace with soft labels worked (CV: +0.003-0.005)</li>
</ul>
<h1>Loss</h1>
<p>In my case, some aux-loss helped.<br>
Loss = ArcfaceLoss + FocalLoss + SpeciesLoss</p>
<ul>
<li>SpeciesLoss: classification of 26-species</li>
</ul>
<h3>progressive dynamic margins</h3>
<p>In the early epochs, training didn't go well if the margins were somewhat large. <br>
Inspired by <a href="https://arxiv.org/pdf/2010.05350.pdf" target="_blank">https://arxiv.org/pdf/2010.05350.pdf</a>, I re-designed the function to increase the margins gradually (a sketch follows the list below). </p>
<ul>
<li>Epochs 1-5: increase the coefficient of the margins linearly from 0.2 to 1</li>
<li>Epochs 6-20: coefficient of the margins = 1 (that is, the function equals the original dynamic margins) </li>
</ul>
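<p>A sketch of how such a schedule could be implemented. The exact rescaling into the [min, max] range is not described in this post, so the mapping below (based on the referenced paper, where rarer classes get larger margins) and the concrete numbers are assumptions.</p>
<pre><code>import numpy as np

def dynamic_margins(class_counts, m_min=0.2, m_max=0.8, power=0.25):
    """Per-class ArcFace margins: rarer classes get larger margins."""
    x = class_counts.astype(float) ** -power
    x = (x - x.min()) / (x.max() - x.min())          # rescale to [0, 1]
    return m_min + x * (m_max - m_min)

def epoch_coefficient(epoch, warmup_epochs=5, start=0.2):
    """Progressive part: scale the margins up linearly during the first epochs."""
    if epoch >= warmup_epochs:
        return 1.0
    return start + (1.0 - start) * (epoch - 1) / (warmup_epochs - 1)

# margins actually passed to the ArcFace loss at epoch 2
counts = np.array([1, 3, 10, 120, 400])
margins = epoch_coefficient(2) * dynamic_margins(counts)
</code></pre>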
<h1>Pseudo Label</h1>
<p>This is also key to raising the scores. <br>
At first, I trained models on pseudo-labels, <br>
and then trained on the original training dataset using these pretrained weights.</p>
<h1>Ensemble</h1>
<p>I concatenated weighted embeddings; this operation helps a little (CV: +0.001-0.002)</p>
<h1>Thresholding</h1>
<p>To predict <code>new_individual_id</code>, I searched for the best threshold by the predicted top-1 species on the validation datasets. </p>
<p>Thank you for reading.<br>
Any questions are welcome!</p>
<p>An example of my effv2 training config is below. <br>
I don't know if the params are useful for another task.</p>
<pre><code># margin setting
margin_config:
  params:
    dynamic_margin_max: 0.8
    dynamic_margin_min: 0.2
    margin_power: 0.25
    prog_epoch_n: 3
# lr scheduler
lr_scheduler: cosine_schedule_with_warmup
lr_scheduler_params:
  num_training_steps: 20
  num_warmup_steps: 5
# optimizer
optimizer_config: AdamW
optimizer_params:
  lr: 7.0e-4
</code></pre>
<p>I use all pseudo labeled data (i.e. all the test data) to increase data for pretraining. </p>
<p>Thank you for the warm message, Qishen! <br>
I read your past GLR solutions again and again. </p>
<p>Sorry for the late reply. I'm going to prepare pseudo code. </p>
<p>Yes, I used <code>n</code> (number of classes), and I implemented the margin function like below.</p>
<pre><code>f(epoch_coeff, n_class) = epoch_coeff * (a * n_class ^ -x + b)
epoch_coeff: linearly increased in first 5 epochs
</code></pre>
<p>Thank you ! </p>
<p>Thanks!! <br>
I'm looking forward to fighting together again someday!</p>
<p>Here it is. <br>
You can mix up embeddings in the same way as basic image-based mixup, and forward the mixed embeddings using the loss below. </p>
<pre><code>import math

import numpy as np
import torch
import torch.nn as nn

# CrossEntropyLossWithSoftlabel is the author's soft-label cross-entropy criterion
# (its definition is not included in this post).
class ArcFaceLossAdaptiveMarginMixup(nn.Module):
def __init__(self, margins, s=30.0):
super().__init__()
self.crit = CrossEntropyLossWithSoftlabel()
self.s = s
self.margins = margins
def forward(self, logits, labels, perm, coeffs):
"""
perm: permutated index in batch by using mixup
coeffs: soft-labels by using mixup
"""
ms = []
ms = self.margins[labels.cpu().numpy()]
cos_m = torch.from_numpy(np.cos(ms)).float().type_as(logits)
sin_m = torch.from_numpy(np.sin(ms)).float().type_as(logits)
th = torch.from_numpy(np.cos(math.pi - ms)).float().type_as(logits)
mm = torch.from_numpy(np.sin(math.pi - ms) * ms).float().type_as(logits)
perm_labels = labels[perm]
perm_ms = self.margins[perm_labels.cpu().numpy()]
perm_cos_m = torch.from_numpy(np.cos(perm_ms)).float().type_as(logits)
perm_sin_m = torch.from_numpy(np.sin(perm_ms)).float().type_as(logits)
perm_th = torch.from_numpy(np.cos(math.pi - perm_ms)).float().type_as(logits)
perm_mm = torch.from_numpy(np.sin(math.pi - perm_ms) * perm_ms).float().type_as(logits)
logits = logits.float()
cosine = logits
sine = torch.sqrt(1.0 - torch.pow(cosine, 2))
# original label
labels2 = torch.zeros_like(logits)
labels2.scatter_(1, labels.view(-1, 1).long(), 1)
phi = cosine * cos_m.view(-1, 1) - sine * sin_m.view(-1, 1)
phi = torch.where(cosine > th.view(-1, 1), phi, cosine - mm.view(-1, 1))
# perm label
perm_labels2 = torch.zeros_like(logits)
perm_labels2.scatter_(1, perm_labels.view(-1, 1).long(), 1)
# fix perm labels for not double-count the same labels
perm_labels2 = perm_labels2 - torch.logical_and(perm_labels2, labels2).int()
perm_phi = cosine * perm_cos_m.view(-1, 1) - sine * perm_sin_m.view(-1, 1)
perm_phi = torch.where(cosine > perm_th.view(-1, 1), perm_phi, cosine - perm_mm.view(-1, 1))
# get index with no label
with_no_label = 1 - (labels2 + perm_labels2 > 0).type_as(logits)
output = (labels2 * phi) + (perm_labels2 * perm_phi) + (with_no_label * cosine)
output *= self.s
loss = self.crit(output, labels, perm_labels, coeffs)
return loss
</code></pre> |
Happywhale - Whale and Dolphin Identification | 11th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We mainly used 512. Bigger embeddings were better in CV, but I could not independently check the LB during the competition.</p>
<blockquote>
<p>100 folds, is that writing error?</p>
</blockquote>
<p>No, it's correct:<br>
one model uses only one fold.</p>
<p>For Example, </p>
<p>I trained EfficientNetB5 at image size 896,<br>
using 99 fold indices for training and fold index 1 for validation.</p>
<p>Next, I trained EfficientNetB6 at image size 768,<br>
using 99 fold indices for training and fold index 2 for validation.</p>
<blockquote>
<p>You are using different fold embedding concat, or I miss something?</p>
</blockquote>
<p>All models predict all of the training/testing data.</p>
<blockquote>
<p>Q3: It is interesting, can you explain more on that, thanks.</p>
</blockquote>
<p>The Siamese network is trained to predict whether a pair is the same identity or not.<br>
We extracted the top-20k most similar pairs using the embeddings. </p>
<p>The Siamese network output looks like this:</p>
<p>image1.jpg, image2.jpg, 0.99 (confidence of being the same)<br>
image1.jpg, image3.jpg, 0.94<br>
image1.jpg, image4.jpg, 0.12</p>
<p>We convert it to a similarity matrix.</p>
<p>We sum the similarity matrix (from the embeddings) and the Siamese network matrix.</p>
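<p>A numpy sketch of that rescoring step, assuming the pair predictions come as <code>(test_image, train_image, same_probability)</code> tuples and that index lookups for both image sets are available; the names and the weighting are illustrative.</p>
<pre><code>import numpy as np

def rescore(cosine_sim, pairs, test_index, train_index, weight=1.0):
    """cosine_sim: (n_test, n_train) similarity matrix from the embeddings.
    pairs: Siamese predictions for the top-K candidates of each test image."""
    siamese = np.zeros_like(cosine_sim)
    for test_img, train_img, same_prob in pairs:
        siamese[test_index[test_img], train_index[train_img]] = same_prob
    return cosine_sim + weight * siamese    # summed matrix used for the final ranking
</code></pre>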
<p>Thank you to the hosts, and also to my team members ( <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a> <a href="https://www.kaggle.com/ren4yu" target="_blank">@ren4yu</a>).<br>
The competition was very tough, so we tried to work harder every day, but it was a lot of fun for us.<br>
Here is a summary of our team's solution.</p>
<h1>Summary</h1>
<ul>
<li>Detection: YOLOV5 + WBF</li>
<li>EfficientNet B5/B6/B7/V2S/V2M/V2L/V2XL + Pseudo Labeling</li>
<li>Rescore Siamese Network</li>
</ul>
<h1>Detection Part</h1>
<h2>Dataset</h2>
<p>First, we found that the whales are small in the images,<br>
so we needed to focus on the whale region for accurate identification.</p>
<p>I used labelImg for annotating whales; this annotation tool can export the YOLO format.<br>
<a href="https://github.com/tzutalin/labelImg" target="_blank">https://github.com/tzutalin/labelImg</a></p>
<p>Finally, we annotated 5800 images.</p>
<h2>Training</h2>
<ul>
<li>img size: 1280</li>
<li>YOLOV5x6</li>
<li>BS8</li>
<li>SyncBN</li>
<li>Epoch 20</li>
<li>6 Fold</li>
</ul>
<h2>Inference</h2>
<p>6-fold models + WBF -> filter to the top 1 box</p>
<h1>Whale Identity Part</h1>
<h2>Dataset</h2>
<ul>
<li>Detection result</li>
</ul>
<h2>Training</h2>
<ul>
<li>EfficientNet B5/B6/B7/V2S/V2M/V2L/V2XL</li>
<li>ArcFace</li>
<li>Adam</li>
<li>Pseudo labeling(Multi-step, Threshold)</li>
<li>CosineAnnealingWarmRestarts</li>
<li>GPU/TPU</li>
</ul>
<p>Ensemble by concatenation (32000-dim); a single model scores about 0.805.<br>
I split the data into 100 folds for training.</p>
<h2>Prediction</h2>
<p>We use cosine similarity for the identification of whales with some post-process.</p>
<h2>Rescore</h2>
<p>Before the final prediction, we apply a Siamese network to the top-20 candidates.<br>
We combine the similarity matrix and the Siamese network score; it's a huge improvement in our score.<br>
It achieved Public 0.881 / Private 0.853.</p>
<h1>What didn't work</h1>
<ul>
<li>DoLG</li>
<li>twice identity using Flip</li>
<li>Swin/ConvNeXt</li>
</ul>
<p>I have 128GB of memory, so it can be calculated in one go.<br>
But if you don't have enough memory, you can split the data and then calculate the similarity in chunks.<br>
In other words, you don't need to calculate everything at the same time.</p>
<p>Without Siamese rescoring (Public/Private)<br>
0.868<br>
0.837</p>
<p>Adding one Siamese network<br>
0.874<br>
0.846</p>
<p>Adding several Siamese networks (4 models)<br>
0.879<br>
0.851<br>
(the final score also includes hyperparameter tuning)</p>
<p>Without pseudo labeling, I reached 0.805.<br>
One of our advantages is that yolov5 gives better cropping.</p>
<p>Thank you! </p>
<ol>
<li>RTX3090 x 2</li>
<li>My team member yu4u used TPU for training; I used GPU.</li>
<li>We created many models (EfficientNetB5 + ArcFace, etc.), about 20. We extracted vectors from each model and finally concatenated them all.</li>
<li>In more detail: I extracted the top 20 using cosine similarity (32000 dims),<br>
because we cannot calculate all pairs with the Siamese network.<br>
The final score is the cosine similarity matrix + the Siamese network prediction (pair confidences converted to a matrix).</li>
<li>I have a 128GB-memory machine, so I can calculate it in one go<br>
(L2 normalization is already applied).</li>
</ol>
<p><code>S_all = V_tests_mat @ V_trains_mat.T</code><br>
(note: numpy float32)</p> |
Happywhale - Whale and Dolphin Identification | 14th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks a lot!</p>
<p>I'm glad to hear that! Your posts really helped me.</p>
<p>Thanks to the organizers for hosting the interesting competition and congratulations to all the winners. It was tough, but was a great experience for me.</p>
<h2>Datasets</h2>
<p>Three Yolov5 models are trained to detect the full body and the back fin.<br>
Scores improved greatly as the dataset changed: original image -> Detic crop -> fullbody crop -> fullbody/backfin crop.</p>
<h2>Models</h2>
<ul>
<li>ArcFace models are trained using fullbody crop (640×640) and backfin crop (448×448), respectively. The parameters of ArcFace are set as scale = 25 and margin = 0.5 (The large margin worked with improved datasets in my case). Embedding size is 2048.</li>
<li>Focal loss for species classification (loss weight = 0.1) is also used.</li>
<li>Backbone: Efficientnet-b7 and ConvNeXt-L</li>
<li>In order to make the embeddings more discriminative, multiply the feature maps by attention weights computed with the GAP features as a query (a sketch follows this list). This worked only with Efficientnet.</li>
<li>Pseudo labeling: take roughly the top 30% of predictions (it would have been better to use more data and multiple iterations).</li>
<li>Distillation: use the features of the teacher model as a soft target and compute the MSE. This greatly improves performance in the early stages of training, but the contribution to the final score does not seem to be that large, so a cosine schedule is adopted to set the final loss weight to 0.</li>
</ul>
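<p>The exact formulation is not given in the post, so the following PyTorch sketch is only one plausible reading of "attention weights computed with GAP features as a query"; the layer, the scaling, and the rescaling factor are assumptions.</p>
<pre><code>import torch
import torch.nn as nn

class GAPQueryAttention(nn.Module):
    """Reweights the backbone feature map using its own GAP vector as a query."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Linear(channels, channels)

    def forward(self, feat):                       # feat: (B, C, H, W)
        b, c, h, w = feat.shape
        q = self.query(feat.mean(dim=(2, 3)))      # GAP -> query, shape (B, C)
        attn = torch.einsum("bc,bchw->bhw", q, feat) / c ** 0.5
        attn = torch.softmax(attn.reshape(b, -1), dim=1).reshape(b, 1, h, w)
        return feat * attn * (h * w)               # rescale to keep magnitudes comparable

# usage: the weighted map is then pooled and fed to the ArcFace neck
weighted = GAPQueryAttention(2560)(torch.randn(2, 2560, 20, 20))
</code></pre>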
<h3>Augmentation</h3>
<p>Average blur, motion blur, gaussian noise, saturation, brightness, contrast, grayscale and affine transform (flip, rotation, shear, scaling and translation) are used.</p>
<h3>Training details</h3>
<ul>
<li>Hyper parameters: AdamW with weight decay = 0.05, base lr = 2e-4, cosine scheduling with linear warmup, batch size = 24 along with gradient accumulation</li>
<li>PyTorch is used, and training for 30 epochs on a single A100 GPU takes about 30 hours for a fullbody model and 15 hours for a backfin model.</li>
<li>Private LB of the best single fullbody model is 0.781.</li>
</ul>
<h2>Post-process</h2>
<ol>
<li>Cosine similarity matrix for the individuals is computed using the features of the test images and ArcFace weights using each model.</li>
<li>Then average the matrices to ensemble eight fullbody/backfiin models.</li>
<li>Insert new_individual with a fixed threshold (should have changed threshold for each species).</li>
<li>Finally, switch adjacent top predictions based on the difference of cosine similarity and the degree of assignment to the top for class balancing.</li>
</ol>
<h2>What did not work</h2>
<p>The following did not work in my case.</p>
<ul>
<li>Elasticface</li>
<li>Magface</li>
<li>Setting sample-wise margin based on the magnitude of losses</li>
<li>Triplet loss</li>
<li>Semi-supervised learning with self-distillation loss of DINO (somewhat worked, but employed pseudo-labeling)</li>
<li>Adding a SOD mask channel to input</li>
<li>Adding an attention weight channel to input</li>
</ul>
<p>Thank you for reading.</p> |
Happywhale - Whale and Dolphin Identification | 18th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Intro</h2>
<p>During this competition, we have seen several approaches to preprocess the competition's dataset. It includes full body cropping, back fin cropping and its variant with the removed background.<br>
Full body and back fin cropping approaches are very valuable independently of each other. We have made that conclusion when we have found that the model trained on the back fin dataset was better when trained on the bigger image sizes with optimal image size around 768x768. So there is no way to train models on full body images while keeping backfin size close to 768x768, and it is also unreasonable to get the final predictions using only back fin images because there are body pigmentations and other body features that can be helpful for identification. That’s why you 100% need to train at least two separate models on full body crops and back fin crops with consequent ensemble mechanisms.</p>
<h2>MLP</h2>
<p>So, we trained several classifiers on different datasets and ensembled their embeddings using a Multi Layer Perceptron (MLP). The MLP was chosen because it has better potential to extract insights from embeddings of models trained on different dataset sources. Specifically, we concatenated features from the different models before feeding them to the MLP, which was trained using ArcFace. The model is very shallow and has dropouts at the beginning (0.15) and at the end (0.50). You can find more details in the attached GitHub repository. In general, the MLP gave us roughly +5% over the best performing CNN classifier.</p>
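<p>A minimal PyTorch sketch of such an MLP over concatenated embeddings; the hidden size, output size, and exact layer order here are assumptions, and the real architecture is in the repository linked at the end of this post.</p>
<pre><code>import torch.nn as nn

class EmbeddingMLP(nn.Module):
    """Shallow MLP on concatenated CNN embeddings, trained with an ArcFace head."""
    def __init__(self, in_dim, hidden_dim=1024, out_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Dropout(0.15),                # dropout at the beginning
            nn.Linear(in_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(inplace=True),
            nn.Dropout(0.50),                # dropout at the end
            nn.Linear(hidden_dim, out_dim),  # embedding fed to the ArcFace margin
        )

    def forward(self, concatenated_embeddings):
        return self.net(concatenated_embeddings)
</code></pre>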
<h2>CNNs</h2>
<p>In total, we had 8 CNN classifiers that were used in MLP training.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Train set images</th>
<th>Test set images</th>
<th>LB score</th>
</tr>
</thead>
<tbody>
<tr>
<td>efficientnetv1-b7</td>
<td>mix-dataset</td>
<td>full body</td>
<td>81.6%</td>
</tr>
<tr>
<td>efficientnetv1-b6</td>
<td>mix-dataset</td>
<td>full body</td>
<td>79.8%</td>
</tr>
<tr>
<td>eca_nfnet_l2</td>
<td>mix-dataset</td>
<td>full body</td>
<td>79.3%</td>
</tr>
<tr>
<td>eca_nfnet_l2</td>
<td>mix-dataset</td>
<td>full body</td>
<td>81.2%</td>
</tr>
<tr>
<td>efficientnetv1-b6</td>
<td>back fin</td>
<td>backfin</td>
<td>77.8%</td>
</tr>
<tr>
<td>efficientnetv1-b6</td>
<td>mix-dataset</td>
<td>backfin</td>
<td>80.0%</td>
</tr>
<tr>
<td>efficientnetv1-b7</td>
<td>original images</td>
<td>original images</td>
<td>66.0%</td>
</tr>
<tr>
<td>efficientnetv1-b7</td>
<td>full body</td>
<td>full body</td>
<td>76.8%</td>
</tr>
</tbody>
</table>
<h2>Mix-dataset</h2>
<p>A big improvement came from the thing we called mix-dataset. During the mix-dataset training procedure our data loader was sampling images from different data sources (back fin and full body images), but testing images were always from a single dataset source. Efficientnetv1-b6 trained and tested on only backfin images had 77.8% LB score, but the same model with the same hyperparameters had 80.0% LB score when trained on mix-dataset and tested on backfin images. <br>
In my understanding, such an approach introduced more training diversity and prevented the models from overfitting. Overfitting apparently took place in our training procedure, because most of the time we saw >99% training accuracy with a tiny training loss. </p>
<h2>Class balancing</h2>
<p>We had two tricks to improve the predictions generated by the MLP model. In the prediction .csv file we had 5 potential match candidates sorted by confidence, from the most confident to the least confident (a sketch implementing both swap rules follows the list).</p>
<ol>
<li>In each row, if we see a first element that was also the first element in more than 10 other rows, and a second element that was never the first in any other row, then we swap them.<br>
[ind_1, ind_2, ind_3, …] -> [ind_2, ind_1, ind_3, …]</li>
<li>If the first element in the row is new_individual and the second element is one that was never the first in any other row, then we swap them.<br>
[new_ind, ind_1, ind_2, …] -> [ind_1, new_ind, ind_2, …]<br>
MLP without class balancing had 0.865 LB score and MLP with class balancing had 0.868 LB score.</li>
</ol>
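<p>A sketch implementing both rules, assuming <code>rows</code> is the list of 5-candidate prediction lists in submission order; the exact counting details may differ from the team's code.</p>
<pre><code>from collections import Counter

def class_balance(rows, max_first_count=10):
    first_counts = Counter(row[0] for row in rows)
    balanced = []
    for row in rows:
        row = list(row)
        never_first_second = first_counts[row[1]] == 0
        # rule 1: over-used first candidate, never-first second candidate
        if row[0] != "new_individual" and first_counts[row[0]] > max_first_count and never_first_second:
            row[0], row[1] = row[1], row[0]
        # rule 2: new_individual first, never-first second candidate
        elif row[0] == "new_individual" and never_first_second:
            row[0], row[1] = row[1], row[0]
        balanced.append(row)
    return balanced
</code></pre>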
<p>github repo: <a href="https://github.com/achilleess/happywhale-2022" target="_blank">https://github.com/achilleess/happywhale-2022</a><br>
This code does not include all of the above-mentioned models. You can find here training procedures for MLP, back fin models, and some of the full body models. The rest of the code may be released later.</p>
<p>Teammates:<br>
<a href="https://www.kaggle.com/neomaoro" target="_blank">@neomaoro</a> <br>
<a href="https://www.kaggle.com/chihantsai" target="_blank">@chihantsai</a><br>
<a href="https://www.kaggle.com/atom1231" target="_blank">@atom1231</a><br>
<a href="https://www.kaggle.com/kzvdar42" target="_blank">@kzvdar42</a> </p> |
Happywhale - Whale and Dolphin Identification | 19th Place - Single Model LB 860 Without Pseudo | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Remek. Congratulations to you and your team.</p>
<p>Training on 6 datasets just means concatenating the six datasets and then training on all <code>6x50k = 300k</code> images. So each epoch, the model will see 6 copies of every train image (with each one being a different crop bbox resize):</p>
<pre><code>train_dataset = get_training_dataset(
train_files1 + train_files2 + train_files3
+ train_files4 + train_files5 + train_files6)
model.fit(train_dataset, epochs=20)
</code></pre>
<p>In general adding additional heads (auxiliary learning) challenges the model to do multiple learning tasks at once and usually makes the first head (the one we care about) smarter. This is a classic trick. I used it in Pet Comp <a href="https://www.kaggle.com/competitions/petfinder-pawpularity-score/discussion/301015" target="_blank">here</a>. </p>
<p>Furthermore ArcFace loss clusters similar <code>individual_id</code> embeddings together in N-dimensional space (image <a href="https://www.kaggle.com/competitions/shopee-product-matching/discussion/226279" target="_blank">here</a>). So adding a second ArcFace loss on species will move similar species together which can help the model get the <code>individual_ids</code> located better (and improve the quality of the embeddings).</p>
<p>After adding a second head, our CV LB boosted about <strong>+0.003</strong> if I remember correctly. The TFRecords have both the <code>individual_id</code> and the <code>species</code> for each train image. Here is the model. (This code is taken from one of <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> models):</p>
<pre><code>def get_model():
with strategy.scope():
margin1 = ArcMarginProduct(
n_classes = N_CLASSES,
s = 19,
m = 0.19,
name='head/arc_margin1',
dtype='float32'
)
margin2 = ArcMarginProduct(
n_classes = 26,
s = 19,
m = 0.19,
name='head/arc_margin2',
dtype='float32'
)
inp = tf.keras.layers.Input(shape = (*IMAGE_SIZE, 3), name = 'inp1')
label1 = tf.keras.layers.Input(shape = (), name = 'inp2')
label2 = tf.keras.layers.Input(shape = (), name = 'inp3')
x = efn.EfficientNetB7(weights = 'noisy-student',
include_top = False)(inp)
EMBEDDING = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.BatchNormalization()(EMBEDDING)
x1 = margin1([x, label1])
x2 = margin2([x, label2])
output1 = tf.keras.layers.Softmax(dtype = 'float32')(x1)
output2 = tf.keras.layers.Softmax(dtype = 'float32')(x2)
model = tf.keras.models.Model(inputs = [inp, label1, label2],
outputs = [output1, output2])
opt = tf.keras.optimizers.Adam(learning_rate = LR)
model.compile(
optimizer = opt,
loss = [tf.keras.losses.SparseCategoricalCrossentropy(),
tf.keras.losses.SparseCategoricalCrossentropy()],
metrics=[tf.keras.metrics.SparseTopKCategoricalAccuracy(k = 5)]
)
return model
</code></pre>
<p>Thanks <a href="https://www.kaggle.com/liuzhangzhen" target="_blank">@liuzhangzhen</a> . Great job achieving solo 36th place Silver!</p>
<p>You can try this with the 4 datasets that Martin posted <a href="https://www.kaggle.com/competitions/happy-whale-and-dolphin/discussion/315524" target="_blank">here</a> and get a great CV LB score!</p>
<p>Then during inference just do this</p>
<pre><code>test_files1 = tf.io.gfile.glob(GCS_PATH1 + '/happywhale-2022-test*.tfrec')
test_dataset1 = get_test_dataset(test_files1)
test_embeddings1 = embed_model.predict(test_dataset1)
test_files2 = tf.io.gfile.glob(GCS_PATH2 + '/happywhale-2022-test*.tfrec')
test_dataset2 = get_test_dataset(test_files2)
test_embeddings2 = embed_model.predict(test_dataset2)
</code></pre>
<p>And then average the embeddings</p>
<pre><code>w1 = 0.5
w2 = 0.5
test_embeddings = w1*test_embeddings1 + w2*test_embeddings2
</code></pre>
<p>And infer the train embeddings the same way.</p>
<p>In popular public notebooks they use 1 dataset like this</p>
<pre><code>train_files = tf.io.gfile.glob(GCS_PATH + '/happywhale-2022-train*.tfrec')
train_dataset = get_training_dataset(train_files)
</code></pre>
<p>To use 2 datasets, just do this</p>
<pre><code>GCS_PATH1 = '../input/DATASET1'
GCS_PATH2 = '../input/DATASET2'
train_files1 = list( tf.io.gfile.glob(GCS_PATH1 + '/happywhale-2022-train*.tfrec') )
train_files2 = list( tf.io.gfile.glob(GCS_PATH2 + '/happywhale-2022-train*.tfrec') )
</code></pre>
<p>Then during training, just do </p>
<pre><code>train_dataset = get_training_dataset(train_files1+train_files2)
model.fit(train_dataset)
</code></pre>
<p>Great idea. We did try concatenation (of the 6 dataset embeddings) and it gave a worse CV. Perhaps the weighted average has fewer resultant columns (features) and generalizes better.</p>
<p>Our final ensemble was 12 models with 8 folds each. And each fold has 6 embeddings. This produced about 576 ( = 12 x 8 x 6) sets of embeddings. Using weighted average we reduced each of the 96 fold models to 1 embedding. After we had a total of 96 embeddings.</p>
<p>Using these embeddings we computed 5 predictions from each model. After we had 480 ( = 96 x 5) predictions. We used a voting ensemble to reduce these 480 predictions to 5 predictions (for each test image).</p>
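<p>For illustration, a simple rank-weighted voting ensemble over many top-5 lists could look like the sketch below; the exact weights the team used are not stated, so the weights here are assumptions.</p>
<pre><code>from collections import defaultdict

def vote(prediction_lists, weights=(5, 4, 3, 2, 1), top_k=5):
    """prediction_lists: one top-5 candidate list per (model, fold) for a single test image."""
    votes = defaultdict(float)
    for preds in prediction_lists:
        for rank, candidate in enumerate(preds[:len(weights)]):
            votes[candidate] += weights[rank]
    return sorted(votes, key=votes.get, reverse=True)[:top_k]
</code></pre>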
<p>I do believe we could have used more embeddings concatenation and/or embeddings averaging instead of a voting ensemble to achieve a higher ensemble LB score.</p>
<p>The objective function is the CV score. Inside the Bayesian function, 6 weights are explored, both train and valid average embeddings are created, KNN is performed and CV score is calculated (over a range of thresholds). The result for each set of proposed set of 6 weights is the CV score.</p>
<p>The Bayesian optimization idea and code are from <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> . Below is an excerpt from his code (modified for 2 datasets for easy reading). Bayesian optimization boosted CV LB about <strong>+0.003</strong> (compared with an equal average of embeddings)</p>
<pre><code>from bayes_opt import BayesianOptimization
pbounds = {'w1': (0.0, 1.0), 'w2': (0.0, 1.0)}
def get_optimization(w1, w2):
training_embeddings = w1 * tr_emb1 + w2 * tr_emb2
validation_embeddings = w1 * val_emb1 + w2 * val_emb2
# Nearest neighbors model
model = NearestNeighbors(n_neighbors = 100,
metric = 'cosine').fit(training_embeddings)
distances, indices = model.kneighbors(validation_embeddings)
best_map = 0
for thres in np.arange(0.20, 0.40, 0.01):
map5 = get_map(n_validation_images, distances, indices,
training_individual_id, validation_individual_id, thres)
if map5 > best_map:
best_map = map5
return best_map
optimizer = BayesianOptimization(
f = get_optimization, pbounds = pbounds)
optimizer.maximize(
init_points = 100, n_iter = 50)
w1 = optimizer.max['params']['w1']
w2 = optimizer.max['params']['w2']
</code></pre>
<p><a href="https://www.kaggle.com/gaborfodor" target="_blank">@gaborfodor</a> Can you describe the procedure you used to pseudo. I assume you took the X% most confident test predictions. </p>
<p>What i'm curious about is did you then train a model with these X% test and use that same model to predict those X% test? </p>
<p>Or did you split the X% test into 5 folds. Then include 4 folds in the training of a new model and use the model to predict the hold out test fold. And repeat this 5 times. Then concatenate the test predictions. (This way all test images are predicted using a model trained on different test pseudo than itself and gives the model a chance to correct its mistakes more).</p>
<p>Unfortunately, I thought it would not help and didn't want to waste 24 hours training a new model. </p>
<p>I read solutions from last year's comp where pseudo labels did not make a big difference and we did some experiments with KNN test embeddings. For example <code>KNN.fit(train + confident test)</code> then <code>KNN.predict(valid)</code> and we did not see a boost in CV score.</p>
<p>But now i know that to get results from ArcFace, you cannot just use test embeddings (from old ArcFace). You need to train new ArcFace on the actual test images with pseudo labels.</p>
<p><a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a> Thanks KhanhVD. Congratulations to you and your team's strong finish. You were very close to gold. You'll get gold next time!</p>
<p>Did your team use pseudo labels? I'm curious because it seems that teams that used pseudo labels shook upward and teams that did not use pseudo labels shook down. Do you know why your team shook down from public to private?</p>
<h1>19th Place HappyWhale Solution</h1>
<p>We're very happy to present our 19th place HappyWhale solution. Our team consists of <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> <a href="https://www.kaggle.com/mpware" target="_blank">@mpware</a> <a href="https://www.kaggle.com/bolkonsky" target="_blank">@bolkonsky</a> It was a pleasure to work with and learn from all these Kaggle Grandmasters. Thank you Kaggle, Ted Cheeseman, collaborators, and everyone at Happywhale for a great competition!</p>
<h1>Summary</h1>
<ul>
<li>Large Backbones like <strong>EffNetB7</strong>, Large Image sizes like <strong>768x768</strong></li>
<li>Two <strong>ArcFace</strong> modules. One for species and one for individual_id both <strong>m=0.19, s=19</strong></li>
<li>Six datasets; Mpware <strong>fullbody</strong>, Mpware <strong>fin</strong>, Jan fullbody, Jan fin, Phalanx detic, Awsaf yolo</li>
<li>Infer all six datasets and <strong>Bayesian optimize</strong> weighted average of six embeddings</li>
<li><strong>Eight fold CV</strong>. Tune <code>new_individual</code> threshold on CV. Best single model <strong>CV 0.865 LB 0.859</strong></li>
</ul>
<h1>Datasets</h1>
<p>We would love to train our models with only the original images and have the model learn to place attention on the dorsal fin and/or fullbody. However the images are very large like 2500x3500. So instead we used six datasets where important parts of the image were cropped and then resized to either 512x512, 640x640, or 768x768.</p>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Apr-2022/box2.png" alt=""><br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Apr-2022/crop2.png" alt=""></p>
<h1>Models</h1>
<ul>
<li>Image Sizes 512x512, 640x640, 768x768</li>
<li>EffNetV1-B5, EffNetV1-B6, EffNetV1-B7</li>
<li>EffNetV2-L, EffNetV2-XL</li>
<li>ConvNext-L</li>
<li>Batchsize = 64</li>
<li>TensorFlow trained with <strong>Nvidia 8xV100 GPU</strong> and CoLab TPU</li>
</ul>
<h1>Train - One Model All Datasets</h1>
<p>We train one model on all 6 datasets for 20 epochs exponential learning rate decay with 5 epochs warmup. So the model sees every train image 120 times. We also use data augmentation and two ArcFace heads.<br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Apr-2022/train2.png" alt=""></p>
<h1>Infer - Bayesian Optimized Average of Six Embeddings</h1>
<p>We infer the model on each dataset separately and get six embeddings for each train and test image. Next we use Bayesian optimization on CV score to find the optimal weighted average of these 6 embeddings to use for KNN matching.<br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Apr-2022/infer2.png" alt=""></p>
<h1>Ensemble</h1>
<p>Our best single model had 8-Fold CV 0.866 and LB 0.859. We trained about a dozen different models and then ensembled the <code>96 = 12 x 8</code> fold models by using a voting ensemble of each fold model's 5 predictions. Our ensemble result was LB 0.868.</p>
<h1>How To Improve</h1>
<p>After reading other top teams' winning solutions, we believe that the next step to boost our model's LB would be to pseudo label the test images and then retrain our models using train and pseudo labeled test. I believe this can boost LB as much as <code>+0.010 to +0.020</code> !</p>
<h1>UPDATE</h1>
<p>After the competition ended, I retrained two of our 12 ensemble models using pseudo labels. The public LB boost <strong>+0.009</strong> and the private LB boost <strong>+0.016</strong>. This confirms that pseudo labeling is very powerful in this competition. </p>
<p>Just adding pseudo to two of our 12 ensemble models boosts our final placement to Gold Medal finish! I assume if we add multiple rounds of pseudo to all our ensemble models, we can climb further into Gold Medal zone!</p>
<p>No. I just did the simplest technique which is take the 66% most confident test predictions and make datasets with those test images and labels. Then i added all of those test images to every fold of a single model during training. Next I ensemble that new single model with our old ensemble.</p>
<p>The result is improved LB score. Next I take the top 66% from this new ensemble and repeat the procedure. I did 3 iterations with two of our single models. I think there are tricks to make pseudo more effective but i haven't tried them.</p>
<p>UPDATE: After the competition ended, I retrained two of our 12 ensemble models using pseudo labels. The public LB boost <strong>+0.009</strong> and the private LB boost <strong>+0.016</strong>. This confirms that pseudo labeling is very powerful in this competition. </p>
<p>Just adding pseudo to two of our 12 ensemble models boosts our final placement to Gold Medal finish! I assume if we add multiple rounds of pseudo to all our ensemble models, we can climb further into Gold Medal zone!</p>
<p>Species classification uses both ArcFace and Softmax. To do classification, you need Softmax. Using ArcFace is optional. So the question is, why did we additionally use ArcFace?</p>
<p>The reason comes from understanding ArcFace explained <a href="https://www.kaggle.com/competitions/shopee-product-matching/discussion/226279" target="_blank">here</a>. We want high quality embeddings. In dolphin/whale matching, we don't use the output of either the <code>individual_id</code> classifier nor the <code>species</code> classifier (we only use embeddings from NN hidden layer). </p>
<p>Using ArcFace improves the quality of embeddings. It moves the N-dimensional vectors around in N-dimensional space and clusters them together. Using two ArcFace helps move the embeddings into better clusters to help us better find matches.</p>
<p>Great job Robson! Another trick that gave us a big boost was to replace the bbox in the dorsal fin dataset of the 3 species that don't have dorsal fin. For those 3 species we used the full body dataset bbox instead. This kept all 6 datasets clean with high quality meaningful bbox.</p>
<p>Thanks Satoshi</p>
<p>Thanks Awsaf! Thanks for your YOLO bbox dataset!</p> |
Happywhale - Whale and Dolphin Identification | 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We optimized a single-fold validation score on all train data by Optuna search. As I explained above, we used the small model and images so that training completes quickly on a single V100 (16GB). I might not remember accurately, but I think it took around one day to run 100+ trials by 8 processes in parallel.</p>
<p>I'm sorry that we can not remember the detail, but different lrs enhanced our CV scores significantly in both of our pipelines.<br>
Thank you for pointing out the flip augmentation! We just forgot to add that to the list. Updated.</p>
<p>You are right on the bbox mix. As <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> pointed out, backfin is assigned 15%.<br>
For "new_individual", we just inserted the prediction of "new_individual" at a certain threshold, so it appears as the 3rd, 4th, and 5th predictions as well.</p>
<p>This was my first participation in a Kaggle competition and I was so fortunate to win 1st place!!!!<br>
I appreciate my team member <a href="https://www.kaggle.com/charmq" target="_blank">@charmq</a>!</p>
<p>We really enjoyed the competition and worked hard literally until the last minutes. We would like to deeply appreciate the Kaggle staff for organizing this great competition and other teams for competing with us.</p>
<h2>Overview</h2>
<p>Our solution is based on sub-center ArcFace with Dynamic margins, which was shown to be effective in <a href="https://www.kaggle.com/competitions/landmark-recognition-2020/discussion/187757" target="_blank">Google Landmark Recognition 2020 the 3rd place solution</a> by <a href="https://www.kaggle.com/boliu0" target="_blank">@boliu0</a> and <a href="https://www.kaggle.com/c/landmark-recognition-2021/discussion/277098" target="_blank">the first place solution of Google Landmark Recognition 2021</a> by <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a>.</p>
<p>Basically, our solution was an ensemble of two pipelines implemented by <a href="https://www.kaggle.com/charmq" target="_blank">@charmq</a> and me. Since we shared knowledge with each other during the competition, our pipelines share much in common. Below, I will mainly explain my pipeline, which was slightly better in CV score. </p>
<h2>Key Points</h2>
<ul>
<li>Tuning of dynamic margin hyperparameters with <a href="https://optuna.org/" target="_blank">Optuna</a></li>
<li>Larger learning rate for ArcFace head</li>
<li>Bounding box mixing augmentation</li>
<li>Ensemble of knn and logit</li>
<li>Two-round pseudo labeling</li>
<li>Ensemble of many models</li>
</ul>
<h2>Dataset</h2>
<p>We used several types of bounding boxes to crop images. We thank <a href="https://www.kaggle.com/jpbremer" target="_blank">@jpbremer</a> and <a href="https://www.kaggle.com/phalanx" target="_blank">@phalanx</a> a lot for providing valuable datasets. We also trained our own yolov5 model using fullbody annotations, which we refer to as fullbody_charm.</p>
<p>For train data, we randomly mixed several bboxes with the ratio of fullbody:fullbody_charm:backfin:detic:none=0.60:0.15:0.15:0.05:0.05. Especially, combining backfin bbox to train data significantly improved the performance possibly because it enhances the robustness to images that only contain backfins. Adding non-cropped images by a small ratio also worked as a regularization. For test data, we took the mean of predictions between fullbody and fullbody_charm.</p>
<p>We used the cropped images resized to a fixed size. We mainly used the image size of (1024, 1024). Some models were trained with the image size of (1200, 1200) and (1440, 1440) for ensembling.</p>
<h2>Backbone & Neck</h2>
<p>We trained several different imagenet-pretrained backbones for ensembling (efficientnet_b5, efficientnet_b6, efficientnet_b7, efficientnetv2_m, efficientnetv2_l, etc). The best performance in a single model was achieved by efficientnet_b7.</p>
<p>Using GeM pooling (p=3) instead of GAP enhanced the performance.</p>
<p>The normalization layer before the ArcFace head was important. Batchnorm was slightly better than Layernorm in our experiments.</p>
<p>In addition to the final feature map of the backbone, we used the second final feature map to capture more local information. We simply concatenated those two GeM-pooled feature maps and passed them to head.</p>
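<p>A PyTorch sketch of that neck: GeM pooling (p=3) applied to the last two backbone stages, concatenated and normalized before the head. Whether a linear projection is used, and its size, are assumptions on my part.</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

def gem(x, p=3.0, eps=1e-6):
    """Generalized mean pooling over the spatial dimensions."""
    return F.avg_pool2d(x.clamp(min=eps).pow(p), x.shape[-2:]).pow(1.0 / p).flatten(1)

class TwoStageNeck(nn.Module):
    def __init__(self, c_second_last, c_last, embed_dim=512):
        super().__init__()
        self.bn = nn.BatchNorm1d(c_second_last + c_last)   # normalization before the head
        self.fc = nn.Linear(c_second_last + c_last, embed_dim)

    def forward(self, feat_second_last, feat_last):
        pooled = torch.cat([gem(feat_second_last), gem(feat_last)], dim=1)
        return self.fc(self.bn(pooled))        # embedding passed to sub-center ArcFace
</code></pre>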
<h2>Head</h2>
<p>For handling imbalanced classes, we adopted ArcFace with dynamic margins. Since it seemed sensitive to hyperparameters, we tuned them on images of (256, 256) and efficientnet_b0 using Optuna. It seemed the acquired hyperparameters also worked well on large images and architectures.</p>
<p>In the last competition, it was reported that handling flipped images as different classes significantly enhanced the performance. In this competition, we did not think that this technique would work well, because some images are taken from different angles. To handle this issue, we adopted sub-center ArcFace with k=2 together with the usual flip data augmentation.</p>
<p>We also added a second head for classifying species. Sub-center ArcFace with dynamic margins worked better than simple Linear head.</p>
<h2>Training</h2>
<p>While we mainly checked a single-fold validation score locally, we trained our models by using whole train data for submission.<br>
Setting the learning rate of the head 10 times larger than the learning rate of the backbone significantly improved the performance.<br>
Our optimal training settings differed, possibly due to slight differences in our pipelines. While I trained the models for 30 epochs with the AdamW optimizer at lr_backbone=1.6e-3 and a warmup cosine annealing scheduler, charmq trained the models for 20 epochs with Adam at lr_backbone=1e-4 and a cosine annealing scheduler. Most of the models were trained with a batch size of 16-32 on 2-8x NVIDIA Tesla V100 (32GB).</p>
<p>In addition to the bounding box mix augmentation described above, we adopted many data augmentations because the models were expressive enough to reach almost 100% of training accuracy. Below are the list of data augmentations we used by Albumentations implementation:</p>
<pre><code>A.Affine(rotate=(-15, 15), translate_percent=(0.0, 0.25), shear=(-3, 3), p=0.5),
A.RandomResizedCrop(image_size[0], image_size[1], scale=(0.9, 1.0), ratio=(0.75, 1.3333333333)),
A.ToGray(p=0.1),
A.GaussianBlur(blur_limit=(3, 7), p=0.05),
A.GaussNoise(p=0.05),
A.RandomGridShuffle(grid=(2, 2), p=0.3),
A.Posterize(p=0.2),
A.RandomBrightnessContrast(p=0.5),
A.Cutout(p=0.05),
A.RandomSnow(p=0.1),
A.RandomRain(p=0.05),
A.HorizontalFlip(p=0.5),
</code></pre>
<h2>Postprocess</h2>
<p>We combined the following two metrics.<br>
knn: Using feature vectors, we calculated the largest cosine similarity for each class in training data. This can be done for each model and is easier to ensemble many models than feature concatenation. In implementation, we looked at only the top 500 training data for each test individual by <code>sklearn.neighbors.NearestNeighbors</code>. As a test-time augmentation, we took the mean of original test images and flipped test images searched over both original train images and flipped train images.<br>
logit: We used the simple output of the model without margins.<br>
(We calculated the mean of predictions for the two bounding boxes as explained above.) While knn worked much better in CV, it was only slightly better than logit on the public leaderboard. This is probably caused by the highly imbalanced data and the distribution differences between train and test (knn is more likely to output classes with more train data). To mitigate this, we mixed the predictions of knn and logit with knn_ratio=0.5. After pseudo labeling, we increased the knn_ratio to 0.8.</p>
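<p>A small sketch of that blending, assuming both metrics have already been mapped to a common (n_test, n_classes) array with comparable scales; that normalization step is an assumption, not something described above.</p>
<pre><code>import numpy as np

def blend_predictions(knn_scores, logit_scores, knn_ratio=0.5, top_k=5):
    """knn_scores: best cosine similarity per (test image, individual id).
    logit_scores: margin-free model outputs, e.g. softmax probabilities."""
    blended = knn_ratio * knn_scores + (1.0 - knn_ratio) * logit_scores
    top_ids = np.argsort(-blended, axis=1)[:, :top_k]
    return blended, top_ids
</code></pre>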
<p>We labeled individuals whose ensembled predictions are lower than a certain threshold as “new_individual”. Through several submission trials, we decided to set the threshold so that the ratio of “new_individual” as the first prediction is 0.165.</p>
<h2>Other</h2>
<p>Pseudo labeling enhanced the public LB score a lot in this competition probably because of extremely imbalanced data. We stuck to improving CV scores until the very final stage of the competition. On the day before the deadline, we got a big boost in the leaderboard score (0.88589/0.85959 -> 0.89343/0.87062) by a pseudo-label submission. The second round of pseudo labeling on the final day also improved the score (0.89680/0.87579). Additional rounds of pseudo labeling might have further improved performance, but unfortunately, we did not have time to do that.</p>
<p>As a final submission, we ensembled around 50 models including the ones trained without pseudo labels and with the first-round pseudo labels. After the competition, we confirmed that the ensemble of only 2 models (the best one from each of us) scored 0.89385/0.87336, which could still win first place!</p>
<h2>What did not work</h2>
<ul>
<li>input 4-channel images with segmentation mask (1st place solution of the last competition)</li>
<li>input 6-channel images combining 2 types of images cropped by fullbody and backfin bboxes</li>
<li>input rectangle images such as (512, 1024)</li>
<li>maintain aspect ratio of images by bounding box expansion instead of resizing</li>
<li>AdaCos, triplet loss</li>
<li>Focal loss</li>
<li>training p of GeM pooling</li>
<li>ConvNeXt</li>
<li>Swin Transformer (384 was too small)</li>
<li>dolg</li>
<li>using pseudo labels for knn</li>
<li>cutmix</li>
</ul>
<h2>Acknowledgement</h2>
<p>We deeply acknowledge great OSS such as PyTorch, PyTorch Lightning, PyTorch Image Models, Albumentations, etc. We would also like to appreciate Preferred Networks, Inc for allowing us to use computational resources.</p>
<p>[Update 2022/05/24]<br>
I added explanations on some minor points.<br>
code (knshnb): <a href="https://github.com/knshnb/kaggle-happywhale-1st-place" target="_blank">https://github.com/knshnb/kaggle-happywhale-1st-place</a><br>
code (charmq): <a href="https://github.com/tyamaguchi17/kaggle-happywhale-1st-place-solution-charmq" target="_blank">https://github.com/tyamaguchi17/kaggle-happywhale-1st-place-solution-charmq</a></p>
<p>Thank you!!!</p>
<p>Thanks for your comments. Congratulations on your solo gold, too!!!</p>
<p><a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a> Thanks for your comments! In our experiment, pseudo labeling by an ensembled prediction for validation data improved a CV score of a single model. We did not test the effect of the ensemble of models trained by pseudo labeling in CV, though.<br>
Anyway, I was amazed that you achieved such a high score without pseudo labeling with the image size 512! I would be very interested if you could elaborate more on your solution in discussion or somewhere.</p>
<p>Thank you for merging with me too. It was a great experience working with you in my first competition!</p>
<p>Thanks! Using fp16 training, that case was fitted in 4x V100 (32GB).</p>
<p>backbone -> bn -> arcface (id), arcface (species)<br>
Please also refer to the source code, which we are planning to release soon.</p> |
Happywhale - Whale and Dolphin Identification | 25th summary | Happywhale ?... No, this is DATASET competition 😅😅😅 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>No, just different parameter settings.</p>
<p>Backfin crop, SOD mask/crop, Detic crop … many, many crop datasets dominated this competition. Thanks to <a href="https://www.kaggle.com/jpbremer" target="_blank">@jpbremer</a> for releasing the dorsal and full-body datasets, and [MFGA] Make Feature-engineering Great Again 🤔. We are still in the epoch of "manually" intelligence.</p>
<p><strong>About the single model</strong></p>
<ol>
<li>Thanks to the backfin crop dataset, we crossed 0.76 for the first time.</li>
<li>Then we aggregated all the manually annotated fullbody datasets and trained a yolov5 detector.</li>
<li>With the new fullbody crop dataset, we trained backfin/fullbody/SOD models and combined them, crossing 0.811 for the first time; then we tried larger image sizes (512 -> 768 -> 1080 …) and larger models (convnext-small -> base, effn-b4 -> effn-b7), which gave some boost on LB/CV and some model diversity.</li>
<li>Multi-task training worked: training the model on both the fine-grained individual task and the species classification task gave us a 0.02 boost.</li>
<li>Pseudo labeling is effective: when our ensemble got 0.851, we used that submission's predictions (without new_individual) to train a single model, which got 0.850 on the LB.</li>
</ol>
<p><strong>two-stage ensemble strategy</strong></p>
<p>First, we used the fold-trained models to predict the validation dataset, and for each image calculated the ratio of models whose #1 prediction is new_individual and the ratio of models whose #1 predictions agree (w/o new_individual). Then we did the statistics: group by the two ratios and calculate the true-label ratio in each group.</p>
<p>To apply the statistical rule, on the test dataset we calculate the same two ratios with the full-train models on each image, and then use the statistics to determine where to place new_individual. The other predictions are ensembled by place weight, the same as in the public kernel.</p>
<p><strong>No time to try</strong></p>
<ol>
<li>transfer learning by species / other similar images / original images …</li>
<li>a multi-branch CNN model + mask layers: we have many crop data sources (backfin/SOD/fullbody); each CNN branch takes one as input, with some mask layers to combine the CNN outputs.</li>
<li>image embedding ensemble</li>
<li>post rank model</li>
<li>power of TPU …</li>
</ol>
<p>Thanks for sharing, we have learned a lot . 😄 see you in next comp.</p>
<p>lol 😄😄😄</p>
<p>Let's team up together sometime if we get the chance 😀</p>
<p>I read the solution you posted; great ideas. The train/val loss gap was too large. Congratulations on finishing in the top three!!</p> |
Happywhale - Whale and Dolphin Identification | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to the competition host, Kaggle and all participants. And congrats to the winners.<br>
Many thanks to <a href="https://www.kaggle.com/jpbremer" target="_blank">@jpbremer</a>. We used the public fullbody dataset and annotation as our main dataset. Without it we could not achieve a high score in a short period of time.</p>
<h1>Single model</h1>
<h2>Dataset</h2>
<p>Two different datasets were used. </p>
<ul>
<li>fullbody dataset (yolov5): <a href="https://www.kaggle.com/competitions/happy-whale-and-dolphin/discussion/311184" target="_blank">this dataset</a> cropped by yolov5. </li>
<li>fullbody dataset (yolox): the above annotations were used for GT and trained yolox.</li>
</ul>
<p>All models were trained using the fullbody dataset (yolov5) before the last day of the contest. Finally, a few models were fine-tuned a few epochs using the fullbody dataset (yolox).</p>
<h2>Model</h2>
<p>efficientnet_l2 worked the best in validation</p>
<h3>Training Recipe</h3>
<ul>
<li>backbone = tf_efficientnet_l2_ns</li>
<li>img_size = 768</li>
<li>loss = Arcface with adaptive margin</li>
<li>augmentation = Horizontal flip, RandAugment</li>
<li>optimizer = SGD</li>
<li>scheduler = CosineDecay with warmup</li>
<li>batch_size = 16 per GPU</li>
<li>n_epoch = 20</li>
</ul>
<p>RandAugment improved the validation score quite a bit.</p>
<h2>Pseudo labeling</h2>
<p>This is the key to getting a high score on the leaderboard.</p>
<p>We use FC layer prediction (<code>(logits * scale).softmax(-1)</code>) of trained models to generate pseudo labels. The confidence threshold was set to 0.8.<br>
Every pseudo labeling round was trained from imagenet pretrained weights.</p>
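<p>A small sketch of how that filtering step might look (function and variable names are my assumptions; only the <code>(logits * scale).softmax(-1)</code> expression and the 0.8 threshold come from the write-up):</p>
<pre><code># Hypothetical sketch: keep only confidently predicted test images as pseudo labels
import torch

def make_pseudo_labels(test_image_ids, logits, scale, conf_thresh=0.8):
    probs = (logits * scale).softmax(-1)      # FC-layer prediction, as described above
    conf, cls = probs.max(dim=-1)             # top-1 confidence and predicted class index
    pseudo = []
    for img, c, p in zip(test_image_ids, cls.tolist(), conf.tolist()):
        if p >= conf_thresh:                  # 0.8 in this solution
            pseudo.append((img, c, p))        # (image, pseudo individual class, confidence)
    return pseudo
</code></pre>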
<p>Following are the leaderboard scores of each round. We used flip testing starting round3.</p>
<p>Pseudo label rounds: backbone = efficientnet_l2</p>
<table>
<thead>
<tr>
<th>Pseudo label round</th>
<th>Dataset</th>
<th>Public score</th>
<th>Private score</th>
</tr>
</thead>
<tbody>
<tr>
<td>initial model (round1)</td>
<td>fullbody (yolov5)</td>
<td>0.846</td>
<td>0.812</td>
</tr>
<tr>
<td>round2</td>
<td>fullbody (yolov5)</td>
<td>0.875</td>
<td>0.849</td>
</tr>
<tr>
<td>round3</td>
<td>fullbody (yolov5)</td>
<td>0.885</td>
<td>0.860</td>
</tr>
<tr>
<td>round4</td>
<td>fullbody (yolov5)</td>
<td>0.889</td>
<td>0.862</td>
</tr>
<tr>
<td>round5</td>
<td>fullbody (yolov5)</td>
<td>0.887</td>
<td>0.863</td>
</tr>
<tr>
<td>round5 (fine-tuning)</td>
<td>fullbody (yolox)</td>
<td>0.891</td>
<td>0.870</td>
</tr>
</tbody>
</table>
<h2>Make submission</h2>
<p>The standard image retrieval method: calculate cosine similarity with the train dataset and take the top-5 ids.</p>
<p>A fixed cosine similarity threshold of 0.5 was used to insert “new_individual”. There is no special post-processing, such as species-specific thresholds.</p>
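<p>For reference, a minimal sketch of that retrieval and insertion logic (numpy; variable names are mine, only the top-5 format and the 0.5 threshold come from the write-up):</p>
<pre><code># Hypothetical sketch: cosine retrieval against train, with new_individual inserted at 0.5
import numpy as np

def predict_top5(test_emb, train_emb, train_ids, new_thresh=0.5):
    test_emb = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
    train_emb = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    sims = test_emb @ train_emb.T                    # cosine similarity matrix

    predictions = []
    for row in sims:
        top, inserted = [], False
        for j in np.argsort(-row):                   # most similar train images first
            if not inserted and row[j] < new_thresh:
                top.append("new_individual")         # fixed threshold from the write-up
                inserted = True
            if train_ids[j] not in top:              # de-duplicate individual ids
                top.append(train_ids[j])
            if len(top) >= 5:
                break
        predictions.append(top[:5])
    return predictions
</code></pre>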
<h1>Last ensemble</h1>
<p>The final submission is a 4-model ensemble of efficientnet_l2 belows. (public=0.897 / private=0.872)</p>
<p>Ensemble models: all backbone = efficientnet_l2</p>
<table>
<thead>
<tr>
<th>Pseudo label round</th>
<th>Image size</th>
<th>Dataset</th>
</tr>
</thead>
<tbody>
<tr>
<td>round3</td>
<td>(1024, 1024)</td>
<td>fullbody (yolov5)</td>
</tr>
<tr>
<td>round4</td>
<td>(768, 768)</td>
<td>fullbody (yolov5)</td>
</tr>
<tr>
<td>round5</td>
<td>(768, 768)</td>
<td>fullbody (yolox) fine-tuned</td>
</tr>
<tr>
<td>round5</td>
<td>(768x2, 768)</td>
<td>vertical concatenated original dataset and fullbody (yolox)</td>
</tr>
</tbody>
</table>
<h1>Performance tips</h1>
<h2>Gradient checkpointing</h2>
<p>To train efficientnet_l2 on an RTX3090, gradient checkpointing is a must. With gradient checkpointing and mixed precision, we could train the network with batch_size 16 on a single RTX3090. Without it, even batch size 2 gives OOM.</p>
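<p>A rough sketch of how those two pieces fit together in a PyTorch training step (the loop details are my assumptions; <code>set_grad_checkpointing</code> is the timm hook referenced at the end of this post):</p>
<pre><code># Hypothetical sketch: timm gradient checkpointing + mixed precision on one GPU
import timm
import torch

model = timm.create_model("tf_efficientnet_l2_ns", pretrained=True, num_classes=0).cuda()
model.set_grad_checkpointing(True)             # recompute activations to save memory

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
scaler = torch.cuda.amp.GradScaler()

def train_step(images, targets, loss_fn):
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():            # fp16 forward pass
        feats = model(images)
        loss = loss_fn(feats, targets)
    scaler.scale(loss).backward()              # checkpointed segments are recomputed here
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
</code></pre>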
<h2>PyTorch build</h2>
<p>We found that different PyTorch builds can greatly affect training throughput. We compared the official release and the build by NVIDIA.</p>
<table>
<thead>
<tr>
<th>docker image</th>
<th>cudnn_benchmark</th>
<th>training throughput (imgs/sec)</th>
</tr>
</thead>
<tbody>
<tr>
<td>pytorch/pytorch:1.11.0-cuda11.3-cudnn8-devel</td>
<td>False</td>
<td>2.5</td>
</tr>
<tr>
<td>pytorch/pytorch:1.11.0-cuda11.3-cudnn8-devel</td>
<td>True</td>
<td>3.4</td>
</tr>
<tr>
<td>nvcr.io/nvidia/pytorch:22.02-py3</td>
<td>False</td>
<td>4</td>
</tr>
<tr>
<td>nvcr.io/nvidia/pytorch:22.02-py3</td>
<td>True</td>
<td>4.1</td>
</tr>
</tbody>
</table>
<p>The build from NVIDIA is 20% faster. Probably because of the updated CUDA/CuDNN version.</p>
<h1>Acknowledge</h1>
<p>takuoko is a member of Z by HP Data Science Global Ambassadors. Special Thanks to Z by HP for sponsoring me a Z8G4 Workstation with dual A6000 GPU and a ZBook with RTX5000 GPU.</p>
<p>In the latest master branch of timm, gradient checkpointing is available.</p>
<p><a href="https://github.com/rwightman/pytorch-image-models/blob/01a0e25a67305b94ea767083f4113ff002e4435c/timm/models/efficientnet.py#L527-L528" target="_blank">https://github.com/rwightman/pytorch-image-models/blob/01a0e25a67305b94ea767083f4113ff002e4435c/timm/models/efficientnet.py#L527-L528</a></p> |
Happywhale - Whale and Dolphin Identification | 30th Simple Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I would like to thank the organizers for hosting the great competition.<br>
also, I congratulate all the top players.<br>
I was very honored to become a teammate with <a href="https://www.kaggle.com/hwigeon" target="_blank">@hwigeon</a> </p>
<h2>Data</h2>
<p>We started with the original dataset + detic crops at first.<br>
We doubled the dataset (original + detic crops) in the same training phase to make our ArcFace learn the margin well.<br>
But finally, we manually made a cropped full-body dataset ourselves and replaced the detic crops with it.</p>
<h2>Models</h2>
<p>Efficientnet 5,6,7+Arcface or Dolg, Nfnet, Nfnet+hybridVit, Nfnet+dolg were used. <br>
Arcface margin = 0.3 is chosen.<br>
The size range is [768~1024]<br>
RAdam+Lookahead or Madgrad was chosen as the optimizer.<br>
The original dataset + our manually cropped dataset reached public LB 0.83x easily.<br>
After pseudo labeling, we were able to score public LB 0.858, private LB 0.826.</p>
<h2>Augmentation</h2>
<p>Flip augmentation for Nfnet networks.<br>
Flip + Hsv augmentation for Efficientnet. </p>
<h2>Validation</h2>
<p>A validation dataset that only has 2 individuals was chosen as the holdout.<br>
We didn't care about new individuals since we believed that the top-1 score matters most.<br>
The demerit of our validation set is that it can't be used to post-process our results to predict new_individuals.<br>
Our top-1 CV reached 0.89x.</p>
<h2>Postprocess</h2>
<p>To use KNN, we concatenated all the embeddings along axis=-1.<br>
Then we used KNN=1 and gathered the nearest 880 individuals.<br>
We used new_individual thresholds of 0.375, 0.4, 0.425, and 0.45 and voted five submissions using 1:1:1:1:1 weights.<br>
We can get the cosine inner product value from the ArcFace output and used it for reranking, but it didn't help much.</p>
<h2>What didn't work</h2>
<p>ConvNext, EfficientNetV2, and Swin Transformer (384 size) didn't work well for us. </p>
<h2>Regret</h2>
<p>We missed the backfin crop while most of the competitors used it.<br>
Many reported that using the backfin is crucial in this competition.<br>
We used the backfin a little bit, but it didn't seem so good, so we moved on right away.</p>
<p>We did our best while we were very busy with work and school, but we are sad to receive the silver medal. I will continue to make a lot of effort. </p>
<p>Thank you for reading this!</p>
<p>Thanks a lot👍</p> |
Happywhale - Whale and Dolphin Identification | 3rd solution【Part】 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I will realease the training notebook soon😊, and my cv may different from yours(because our valid data is less than yours, just images with backfins).</p>
<p>I use optuna, here is the <a href="https://optuna.org/" target="_blank">link</a>.</p>
<p>approximately 17000, and it can boost about 0.03-0.04.</p>
<p>We concatenated the training set and test set together and normalized them jointly.</p>
<pre><code># missing import added; this helper scales the input by its L1 norm (ord=1)
import numpy as np

def normalize_list_numpy(list):
    normalized_list = list / np.linalg.norm(list, ord=1)
    return normalized_list
</code></pre>
<p>We just used the public one; of course we tried to re-tag the backfins and train a new detector, but the CV score did not improve.</p>
<p>Thanks, you did a great job too!</p>
<p>Nothing magic, you will see in the pulic notebook soon.</p>
<p>We tried v2 XL with same dataset, the cv score is around 0.875.</p>
<p>Improving the original dataset was very important 🤓</p>
<p>Congratulations to all the winners. Thanks Happywhale and Kaggle for hosting this interesting competition.</p>
<p>Thanks for great teammates <a href="https://www.kaggle.com/biglafe" target="_blank">@biglafe</a> 、@tikboa 、@qiuyanxinjupiter、@cuteffff . Congratulations to us!</p>
<p>Total solution can be found <a href="https://www.kaggle.com/competitions/happy-whale-and-dolphin/discussion/319896" target="_blank">here</a>, and this is our part solution. </p>
<ol>
<li>Brief. In our part of the solution, we use this wonderful notebook <a href="https://www.kaggle.com/code/jpbremer/backfin-detection-with-yolov5" target="_blank">backfin-detection-with-yolov5</a> to get the part (backfin) data. The main idea is that the model learns more easily from pictures with a backfin (seen as a part); these models can be ensembled with the body models, and in the end the predictions for whales without a backfin are substituted.</li>
<li>Model. We use effv1-b7 with ArcFace at a resolution of 512x512 for model 1 (CV around 0.880), and effv1-b7 with CurricularFace + bnneck at a resolution of 512x512 for model 2 (CV around 0.880 too). The training notebooks can be found <a href="https://www.kaggle.com/code/librauee/train-newpse-backfins-b7-512-curri-fold1" target="_blank">here</a> and <a href="https://www.kaggle.com/code/librauee/train-newpse-backfins-b7-512-arc-fold1" target="_blank">here</a>; nothing magic.</li>
<li>Train. We use Kaggle TPU resources to train the above models on 3 folds of a 10-fold random split; pseudo labels come from the ensemble model's predictions on the test set.</li>
<li>Ensemble. For each fold, we concat the 2 part-model and 6 body-model embeddings into a total of 8 * 512 = 4096 dimensions to calculate distances and select the nearest results; we also use new_id replacement for postprocessing. Notice that before concatenating, we need to normalize each 512-dim embedding block internally (see the sketch after this list). We get around 0.930 local CV for the ensemble.</li>
<li>Optimize. We use Bayesian optimization to find the best weight for each 512-dim embedding block; it also worked on the private LB.</li>
</ol>
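<p>A small sketch of the embedding ensembling described in points 4 and 5 (numpy; the per-block weights themselves were found with Bayesian optimization, e.g. via Optuna as mentioned in the comments above, and the normalization choice here is an assumption):</p>
<pre><code># Hypothetical sketch: normalize each model's 512-dim block, weight it, then concatenate
import numpy as np

def normalize_block(emb, ord=2):
    # each model's embeddings are normalized internally before concatenation
    return emb / np.linalg.norm(emb, ord=ord, axis=1, keepdims=True)

def build_joint_embedding(embedding_list, weights):
    """embedding_list: one (n_images, 512) array per model; weights: one float per model."""
    blocks = [w * normalize_block(e) for e, w in zip(embedding_list, weights)]
    return np.concatenate(blocks, axis=1)      # 8 models x 512 = 4096-dim joint embedding
</code></pre>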
<p>Thanks for your reading!</p>
<p>Score 0.88 is not LB, just local cv(contains about 4k images with backfins).</p>
<p>Of course, here are the dataset source:</p>
<ol>
<li>training and test dataset<br>
<a href="https://www.kaggle.com/datasets/librauee/randomdatasetc" target="_blank">https://www.kaggle.com/datasets/librauee/randomdatasetc</a></li>
<li>pseudo dataset<br>
<a href="https://www.kaggle.com/datasets/librauee/randomdataseth" target="_blank">https://www.kaggle.com/datasets/librauee/randomdataseth</a></li>
</ol>
<p>and you can get the GCS path by the follows:</p>
<p><code>GCS_PATH1 = KaggleDatasets().get_gcs_path('randomdatasetc')</code></p>
<p>Getting 0.96 with RoBERTa, DeBERTa or the like is no problem; I mainly used LGB, and my features were probably a bit better 👀</p> |
Happywhale - Whale and Dolphin Identification | 49 place - Silver Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I want to thank Kaggle for the super platform and super community. Everything I learned about DS I learned on Kaggle. Secondly, thanks to the organizers of the contest for an interesting task, it helped me get distracted from sad thoughts.</p>
<p><strong>Dataset</strong><br>
The key point in the competition was the dataset. I used the <strong>Backfins</strong> and <strong>Fullbody</strong> datasets by JAN BRE.<br>
There was an idea to use an intermediate dataset, but I didn’t have time.<br>
I also tried a dataset with the water removed, but the result was much worse. I added it to the ensemble.</p>
<p><strong>Model</strong><br>
I tried Effnet models, and Effnet 7 worked best.<br>
ArcFace loss.<br>
Image resolution is 768. I tried 1024, but the model did not converge :(<br>
I also tried ConvNext; the result was worse, but I added it to the ensemble.<br>
30-50 epochs with a 5-epoch warm-up and then cosine decay.<br>
There was no overfit at all. More epochs - better results :)</p>
<p><strong>CV and Ensemble</strong><br>
I concatenated embeddings from 9 models and computed the cosine distance between train and test images.<br>
Since there was a lot of data and very little TPU time, I did cross-validation on only 1 fold.<br>
I got 0.72 CV without <strong>new_individual</strong> and 0.86 CV with a threshold of 0.6.<br>
I then trained all models on 100% of the train data and used a threshold of 0.65, since the test dataset has fewer <strong>new_individual</strong>.</p>
<p><em>Thanks for reading!</em></p>
<p>A big boost. For example single model B7 - 768 on Backfins DS 1 fold i got 0.779 on Public LB, and 0.733 Private. And same model trained on 100% train data Backfins DS 0.808 Public and 0.762 Private. With same threshold 0.5, same epochs, same LR.</p> |
Happywhale - Whale and Dolphin Identification | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks. Yeah I much prefer Pytorch. The first thing I do when I start working on a TF project is to convert everything into pytorch :) </p>
<p>Thanks. We used pytorch with <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">timm</a> library. We used GPUs only. The largest models were trained on 8x V100s with 32G mem. Some smaller models including yolov5 were trained on 4x A100s with 40G mem.</p>
<p>Congrats to all the winners. It has been a tough competition. The top 2 teams had a very strong last push. Thanks for the great collaboration once again, my long time teammate <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> </p>
<h1>TL;DR</h1>
<ul>
<li>Yolov5 Detection</li>
<li>Dynamic Margin ArcFace with DOLG CNN backbone<ul>
<li>Best backbone is ConvNext</li></ul></li>
<li>Two heads: predicting ids and species</li>
<li>3 rounds of pseudo labeling</li>
<li>Tune new_individual threshold separately for each species</li>
</ul>
<h1>Detection</h1>
<p>We trained yolov5m with image size 512 for 40 epochs. We started by using this <a href="https://www.kaggle.com/datasets/awsaf49/happywhale-boundingbox-yolov5-dataset" target="_blank">public notebook's</a> predictions as labels. We then visualized examples with low OOF confidence; if the predicted bboxes were wrong, we removed the example from the training set or fixed the ground-truth bbox.</p>
<p>We iterated this for 9 rounds. In the end, most OOF predictions looked correct (OOF IoU = 0.93863), but some test-set bboxes were still off. In hindsight, we probably should have started from scratch and labeled thousands of images, like other top teams did.</p>
<p>After getting the predicted bbox, we extend it by 20% and feed the crop to arcface model.</p>
<h1>Model</h1>
<p>The modeling part is heavily influenced by the recent landmarks competitions. The architecture is Dynamic Margin ArcFace with DOLG CNN backbone. The dynamic margin arcface was introduced by us in last year's Landmark, see detail <a href="https://www.kaggle.com/competitions/landmark-recognition-2020/discussion/187757" target="_blank">here</a>. The DOLG was introduced to the Kaggle community by <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a> in this year's Landmark, see detail <a href="https://www.kaggle.com/competitions/landmark-retrieval-2021/discussion/277099" target="_blank">here</a>.</p>
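<p>As an illustration of the dynamic-margin idea (the exact mapping the team used is described in their linked Landmark 2020 post; the formula and constants below are only placeholders), per-class margins can be derived from class frequency so that rare individuals get a larger margin:</p>
<pre><code># Illustration only: per-class ArcFace margins from class counts (constants are placeholders)
import numpy as np

def dynamic_margins(class_counts, m_min=0.2, m_max=0.5):
    counts = np.asarray(class_counts, dtype=np.float64)
    inv = counts ** -0.25                                  # rarer class -> larger value
    inv = (inv - inv.min()) / (inv.max() - inv.min() + 1e-12)
    return m_min + inv * (m_max - m_min)                   # one margin per individual_id
</code></pre>
<p>The resulting vector is then handed to the ArcFace loss so that each target class uses its own margin instead of a single global one.</p>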
<p>Other components of the top landmark solutions didn't work here though, including sub center arcface, and vision transformers. All the vision transformers underperform CNNs. The best CNN in our solution is ConvNext.</p>
<p>The only twist in the model is that we output two heads, predicting both individual_id and species.</p>
<p>For augmentations we used the following plus mixup</p>
<pre><code>import albumentations as A

transforms = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomContrast(limit=0.2, p=0.75),
    A.ShiftScaleRotate(shift_limit=0.0, scale_limit=0.3, rotate_limit=10, border_mode=0, p=0.7),
    A.HueSaturationValue(hue_shift_limit=20, sat_shift_limit=30, val_shift_limit=20, p=0.5),
])
</code></pre>
<h1>Training steps</h1>
<ol>
<li>Tune everything on 5 fold. i.e. train on 80% of training data per fold, "sacrificing" some individual_ids in order to do cross validation.</li>
<li>Ensemble 9 fold models to make pseudo labels.</li>
<li>Train 6 best models on 100% training data + pseudo labeled test data</li>
<li>Make new pseudo labels from step 3 ensemble</li>
<li>Repeat step 3-4 for two more rounds</li>
<li>Ensemble the 12 models from last two rounds.</li>
</ol>
<p>Average single model's public LB in 3 pseudo label rounds are 0.855, 0.862, 0.865</p>
<h1>Ensemble</h1>
<p>The best performing backbone is ConvNext. We also trained 3 non-ConvNext for diversity. The final six models are ConvNext Base, Large, XLarge, EfficientNet B7, V2L, NFNet L2. We used diversified image sizes for different models: 640, 672, 704, 736, 768, 800, 832, 864, 896, 960, 1024</p>
<p>The ensemble is done by concatenating single models feature, before computing cosine similarity.</p>
<h1>new_individual threshold</h1>
<p>Since final models are trained on 100% of training data, there's no validation scores. New_individual's thresholds therefore need to be tuned on the LB. </p>
<p>We noticed that different species have different levels of difficulty to predict, hence the optimal new_individual thresholds are different for different species. Tuning each species on the LB is risky, so we: (1) tuned each species' <em>relative</em> threshold separately on CV, (2) tuned the overall threshold level on LB, (3) adjusted the LB-optimal overall threshold level by the CV-optimal species relative thresholds.</p>
<p>This adjustment boosted our public LB from 0.882 or 0.883 to 0.888</p>
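<p>In other words, the final per-species threshold is the LB-tuned overall level shifted by a CV-tuned per-species offset. A tiny sketch (all numbers here are placeholders, not the team's values):</p>
<pre><code># Hypothetical sketch of combining an overall level with per-species relative offsets
overall_level = 0.55                    # tuned on the public LB
relative_offset = {                     # tuned per species on CV, relative to the average
    "beluga": -0.05,
    "blue_whale": 0.02,
    # ... one entry per species
}

def new_individual_threshold(species):
    return overall_level + relative_offset.get(species, 0.0)
</code></pre>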
<p>We used an ensemble of 15 models’ species head to predict test set's species.</p>
<h1>Things that didn't work</h1>
<ul>
<li>Vision Transformers</li>
<li>Reranking Post processing</li>
<li>Using the previous humpback whale competition's data to pretrain</li>
<li>Use HFlip to double the number of individual_ids, a trick that worked well in previous humpback whale competition</li>
</ul>
<p>Thanks. Yeah, couldn't agree more. 💯</p>
<p>Thanks. Nothing special to train ConvNext. We used the <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">timm</a> library, which makes it super easy to exchange backbones. The backbones we used are <code>convnext_xlarge_384_in22ft1k</code>, <code>convnext_large_384_in22ft1k</code>, <code>convnext_base_384_in22ft1k</code></p>
<p>People reported good ConvNext performance in a previous competition, I think PetFinder. I also got equal or better results with it on a non-competition project than EffNets.</p>
<p>Thank you. We haven't tracked top1 or top5 accuracy, only the map5 score.</p>
<p>We don't plan to open source the code this time. We didn't do anything special during inference. It's similar to landmark. I'm afraid it's hard to say what went wrong with your pipeline. It could be that the hyperparameters are not well tuned, the quality of the crop dataset is not high, …, so many places where things may go wrong.</p> |
Happywhale - Whale and Dolphin Identification | 55th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your questions.</p>
<p>Yes. SparseCategoricalFocalLoss is one from focal_loss.<br>
We used <a href="https://focal-loss.readthedocs.io/en/latest/generated/focal_loss.SparseCategoricalFocalLoss.html" target="_blank">focal_loss</a> library. </p>
<p>I'm sorry for causing the misunderstanding: we got one embedding for each image by averaging the 4 types of TTA embeddings. We also tried keeping the 4 TTA embeddings separately, but it did not improve our score.</p>
<p>Congratulations to all the winners. Thanks to Kaggle for hosting such a great competition.<br>
This is our first time getting a medal on Kaggle.</p>
<p>Here is our solution.</p>
<h1>Dataset</h1>
<p>We used fullbody and backfin dataset created by <a href="https://www.kaggle.com/jpbremer" target="_blank">@jpbremer</a>. We thank <a href="https://www.kaggle.com/jpbremer" target="_blank">@jpbremer</a> a lot, because these dataset contributed to boost our score.</p>
<h1>Models</h1>
<p>We trained 5 types of model with Arcface.</p>
<ul>
<li>Efficient-netV1B5</li>
<li>Efficient-netV1B6</li>
<li>Efficient-netV1B7</li>
<li>Efficient-netV2L</li>
<li>ConvNext</li>
</ul>
<h1>Training</h1>
<h2>Augmentation</h2>
<ul>
<li>Random Rotate(-10 to 10 degree)</li>
<li>Random hue</li>
<li>Random contrast</li>
<li>Random brightness</li>
</ul>
<h2>Training settings</h2>
<ul>
<li>batch_size = 32</li>
<li>image_size = 700</li>
<li>Optimizer = Adam</li>
<li>loss = SparseCategoricalFocalLoss (γ=1)</li>
<li>embedding size = 2048</li>
<li>Batch Normalization before Dropout</li>
</ul>
<h2>How to train models</h2>
<p>We trained each model in 2 steps on each dataset. We used all the train data (no validation data) for training.</p>
<ul>
<li>1st step <br>
We trained models with all augmentation for 30 epochs.</li>
<li>2nd step <br>
We trained models without Random Rotate augmentation for 25 epochs (In the case of Efficient-netV2L and ConvNext, 20 epochs).</li>
</ul>
<p>Finally, we got 10 models (5 models for 2 dataset).</p>
<h1>Postprocess</h1>
<p>We used trained models to get embeddings of training dataset and test dataset.</p>
<ul>
<li><p>TTA (Test Time Augmentation) <br>
We used TTA to get embeddings from each trained model. We input 4 types of images into a trained model and calculated the average of the embeddings.</p>
<ul>
<li>normal image</li>
<li>flip left right image</li>
<li>-10 degree rotate</li>
<li>10 degree rotate</li></ul></li>
<li><p>L2 normalization <br>
We calculated L2 normalization.</p></li>
<li><p>Concat embeddings <br>
We concatenated all embeddings in each dataset.</p></li>
<li><p>K-NN <br>
In fullbody dataset and backfin dataset, we calculated distance between training embeddings and test embeddings, and got top 100 (k=100) individual ids for each test ids.</p></li>
<li><p>Ensemble </p></li>
</ul>
<ol>
<li>We calculated the top-100 individuals' confidences (confidence = 1 - distance) for each test id.</li>
<li>We added the backfin confidences to the fullbody confidences.</li>
<li>We inserted new_individual with a fixed threshold (threshold=0.5).</li>
<li>We sorted the confidences and extracted the top 5 individuals (see the sketch just after this list).</li>
</ol>
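<p>A compact sketch of those four steps (pure Python; the data layout is my assumption, and treating the 0.5 threshold as the confidence of new_individual is one common way to implement step 3):</p>
<pre><code># Hypothetical sketch: fullbody_conf / backfin_conf map image_id -> {individual_id: confidence}
def ensemble_predictions(fullbody_conf, backfin_conf, new_thresh=0.5):
    submission = {}
    for image_id, conf in fullbody_conf.items():
        merged = dict(conf)
        for ind_id, c in backfin_conf.get(image_id, {}).items():
            merged[ind_id] = merged.get(ind_id, 0.0) + c        # step 2: add backfin confidences
        merged["new_individual"] = new_thresh                   # step 3: fixed threshold
        ranked = sorted(merged, key=merged.get, reverse=True)   # step 4: sort by confidence
        submission[image_id] = ranked[:5]
    return submission
</code></pre>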
<h1>What did not work</h1>
<h2>Models</h2>
<ul>
<li>ViT</li>
<li>BEiT</li>
<li>DeiT</li>
<li>Swin Transformer(swin_large_384)</li>
<li>Efficient-netV2XL</li>
</ul>
<h2>Training settings</h2>
<ul>
<li>GeMpooling</li>
<li>CosFace</li>
<li>AdaCos</li>
<li>SparseCategoricalCrossentropy</li>
<li>Flatten Layer</li>
<li>RMSprop</li>
</ul>
<h2>Augmentaion</h2>
<ul>
<li>-90 to 90 degree randomrotate augmentation</li>
<li>Cutout augmentation</li>
<li>Mix stochastically YOLOv5 and Detic in happywhale-tfrecords-bb</li>
<li>Random Crop</li>
</ul>
<h2>Postprocess</h2>
<ul>
<li>ZCA</li>
<li>query expansion(αQE)</li>
</ul>
<h1>Computational resouces</h1>
<ul>
<li>Google Colab Pro +</li>
</ul>
<h1>Acknowledgement</h1>
<p>Thank you <a href="https://www.kaggle.com/youtaro" target="_blank">@youtaro</a> <a href="https://www.kaggle.com/shoheishimatani" target="_blank">@shoheishimatani</a> <a href="https://www.kaggle.com/room208" target="_blank">@room208</a> <a href="https://www.kaggle.com/aprilsixteen" target="_blank">@aprilsixteen</a> for this competition!</p> |
Happywhale - Whale and Dolphin Identification | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>score above threshold. Finally, about 2,0000 more were added.</p>
<p>Because of the large size, I divided the data into 7 kaggle datasets per type of data. The variations are two types of fullbody and backfin data each, and multiple types of box sizes. To save training time as much as possible, I created pre-cropped data. Of course, this contributed greatly to the accuracy.</p>
<p>Yes, And I treated test which is close to train as a new train.</p>
<p>Thanks to you and cats:)</p>
<p>Congrats to all the winners.<br>
Thanks to Kaggle and the hosting team for an interesting competition.<br>
I look forward to the third one:)</p>
<p>Here is my solution summary.</p>
<h1>Dataset</h1>
<p>I used the full body and back fin data created by Jan. And I also used the results of training the detector using Jan's annotations. there were two different boxes for each fullbody / backfin.<br>
I also used data with a slightly larger box. As a result, a fairly diverse set of data was used as material for the ensemble. And full images were also used as material for the ensemble too.<br>
Image sizes: 512 ~ 896.</p>
<h1>validation</h1>
<p>Since I started working with tensorflow late and was short on time, I took the approach of training on all the train data and checking scores on the LB.<br>
Development without OOF was quite difficult, but this time I thought it would not be a problem because the shake seemed to be quite small.</p>
<h1>Model</h1>
<h3>tensorflow</h3>
<p>All models are connected to dolg and arcface.<br>
Dynamic margins were equally accurate with or without. Both were used.</p>
<ul>
<li>efficientnet v1: 5 / 6 / 7 / l2</li>
<li>efficientnet v2: l / xl</li>
<li>convnext: l / xl<br>
Since I started using tensorflow in April, I ended up using the augmentation and hyperparameters as they are in the public notebook.</li>
</ul>
<h3>pytorch</h3>
<p>All models are connected to arcface.(without dolg, without dynamic margins)</p>
<ul>
<li>convnext :xl</li>
<li>efficientnet: l2</li>
<li>swintransformer: large384 (image size was 768)<br>
I used a fairly heavy augmentation.</li>
</ul>
<h1>Inference</h1>
<p>I compared the similarity of the concatenated feature maps between train and test.<br>
The dimension of the final feature map exceeded 20,000.<br>
Different thresholds were used to determine the new_individual id for each species.<br>
No post-processing.</p>
<h1>iterative pseudo labeling</h1>
<p>By using pseudo labeling, I can see not only the train but also the similarity to the confident test set. This is why pseudo labeling is important in this competition. So by repeating pseudo labeling multiple times, I was able to improve the score little by little.</p>
<h1>cat cafe</h1>
<p>Since this was an 'animal competition', working in a cat cafe greatly improved my score.<br>
<img src="https://user-images.githubusercontent.com/28746788/163918410-ae3f65e9-f620-4006-abb5-403a448e73f3.png" alt="cat.png"></p>
<p>The dieter implementation is helpful.</p>
<p>1, The dieter implementation is helpful.<br>
2, About 0.5%~1%.</p>
<p>Thank you:)</p>
<p>Thanks!! 🐈<br>
I used multiple colab pro+ and kaggle TPUs. 🐈🐈</p>
<p>Thanks, Qishen!<br>
GM is just a passing point, but I am still happy that I am so close to achieving it.<br>
I will try my best to be like you one day!</p> |
Happywhale - Whale and Dolphin Identification | 83rd – 4 places away from Silver | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><em>First, let me thank the Kaggle staff for organizing this great competition.</em><br>
<em>Congratulations to all the winners. It is a super valuable and educative experience!</em></p>
<p>Initially, I wasn't going to write this post because my solution is not as strong and thoughtful as the ones other high-rank teams proposed. But I thought it would be interesting and instructive for novice competitors like me. BTW, it can give some insight into what actions you may take to reach a bronze medal. The solution described here ranked me 83rd with a Bronze medal, which is 4 places away from Silver after the LB clean-up.</p>
<h2>Dataset</h2>
<p>Super valuable cropped datasets were provided by <a href="https://www.kaggle.com/jpbremer" target="_blank">@jpbremer</a>: backfin and fullbody. These two datasets served as the basis for my final submission.</p>
<h2>Model</h2>
<p>Due to limited time, I ended up only with ArcFace. An ArcFace margin of 0.3 was chosen.<br>
When searching for nearest neighbors, concatenated embeddings from the fullbody models and backfin models are used.<br>
The following models are used as an ensemble in the final submission:<br>
<strong>Backfin dataset</strong></p>
<ul>
<li>image size 544</li>
<li>Efficientnet 5</li>
<li>batch size 128</li>
</ul>
<hr>
<ul>
<li>backfin dataset</li>
<li>image size 640</li>
<li>Efficientnet 6</li>
<li>batch size 64</li>
</ul>
<p><strong>Fullbody dataset</strong></p>
<ul>
<li>image size 576</li>
<li>Efficientnet 5</li>
<li>batch 128</li>
</ul>
<hr>
<ul>
<li>image size 640</li>
<li>Efficientnet 6</li>
<li>batch 64</li>
</ul>
<hr>
<ul>
<li>image size 608</li>
<li>Efficientnet 7</li>
<li>batch 64</li>
</ul>
<p><strong>Trained only with beluga images from fullbody</strong></p>
<ul>
<li>image size 704</li>
<li>Efficientnet 5</li>
<li>batch size 64</li>
</ul>
<h2>Augmentation</h2>
<p>Slightly different augmentations are used for backfin and fullbody. They are the following:<br>
<strong>Backfin</strong></p>
<pre><code>Flip left-right
Rotation +/- 10 deg
Gray 0.6
Brightness +/- 0.15
Hue 0.05
Saturation 0.50 – 1.50
Contrast 0.55 – 1.45
Shift vertical=0.02, horizontal=0.07
</code></pre>
<p>The only changes to fullbody are:</p>
<pre><code>Rotation +/- 8 deg
No gray
Brightness +/- 0.10
Hue 0.02
No shift
</code></pre>
<h2>TTA</h2>
<p>It did not work well here. In this case, the only augmentation that was used is horizontal flip.</p>
<h2>Pseudo Label</h2>
<p>It did not work for me for some reason. I see that many high-rank teams use it and it boosted the score. Something is wrong on my side.</p>
<h2>Thresholding</h2>
<p>To predict a <code>new_individual_id</code>, search for the maximum threshold by the predicted top5-species.</p>
<h2>Ensemble</h2>
<p>I combined the different models' target confidences with a sum strategy and sorted them in descending order, which gave a good boost to my solution.</p>
<p><code>Final LB Public 0.829, private 0.792</code></p>
<p>Let me know if you have any questions!</p> |
Happywhale - Whale and Dolphin Identification | 8th place solution [My part] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>My key features</strong> in importance order:</p>
<ol>
<li>A lot of GPUs (or TPUs)</li>
<li>Custom detector</li>
<li>Pseudo labeling</li>
<li>Big input image size</li>
<li>Invariant loss part from <a href="https://arxiv.org/abs/2101.05419" target="_blank">DAIL -- Dataset-Aware and Invariant Learning for
Face Recognition</a></li>
</ol>
<p><strong>Custom detector</strong><br>
I labeled 1k train images by hand, trained YOLO, verified 3k images by hand, and trained the final detector with 4k labeled images. There are two classes: dorsal fin and full body. The detector trained very well with perfect quality; I had a few hundred images without boxes, usually images under water or tails.</p>
<p><strong>DAIL Invariant loss part</strong><br>
The idea -- we have two datasets: dorsal fins and bodies, it give us more data than train (80k+ vs 50k+ images). Let's train it together with kind of different heads: one head for fins and one for bodies. Also, It increases train time 1.5x.</p>
<p><strong>Scores</strong><br>
Best solo model - 799<br>
Best solo model with pseudo data - 852<br>
Ensemble score (concat) - 859<br>
Ensemble with teammates - 872<br>
Different threshold for new individual based on species - best solution (884)</p>
<p><strong>Pseudo</strong><br>
I have two iterations, from submit ~840 I took 60% top predictions, got around 830 solo model score. The second iteration after team merge, from submut ~860 I took 70% top predictions (around 15k image).</p>
<p><strong>Train details</strong><br>
Best backbones: dm_nfnet_f6, image size 576 and tf_efficientnet_l2_ns, image size 800 (from timm)<br>
Embedding size: 4096<br>
Loss: AMSoftmax aka CosFace (no difference in score vs ArcFace), m=0.35 and s=25-30 (see the sketch just below)<br>
LR Scheduler: CosineAnnealingLR with SGD<br>
Epochs: 20-30<br>
Augmentation: Horizontal flip, blur; increasing amount of augmentation decreased my metrics</p>
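<p>For completeness, a minimal sketch of the CosFace / AM-Softmax loss with those margins (m=0.35 and s=30 come from the post; the rest is a generic implementation, not the author's code):</p>
<pre><code># Minimal CosFace / AM-Softmax sketch: subtract the margin only at the target class
import torch.nn.functional as F

def cosface_loss(cosine, labels, m=0.35, s=30.0):
    """cosine: (batch, n_ids) cos(theta) logits from a normalized linear layer."""
    one_hot = F.one_hot(labels, num_classes=cosine.size(1)).float()
    logits = s * (cosine - m * one_hot)
    return F.cross_entropy(logits, labels)
</code></pre>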
<p>Ensemble - 4 models, two different backbones and two scales (25 and 30)<br>
I trained on the whole data without folds.</p>
<p><strong>Species classification</strong><br>
Our last big improve -- thresholds based on species, I've trained it at start of competition, <a href="https://www.kaggle.com/code/kwentar/species-classification" target="_blank">train notebook</a>, <a href="https://www.kaggle.com/datasets/kwentar/happywhale-test-species" target="_blank">dataset</a></p>
<p><strong>What didn't work</strong></p>
<ol>
<li>GeM and other poolings, average best for me</li>
<li>Model on individual species</li>
</ol>
<p>Thank you all and especially my teammates <a href="https://www.kaggle.com/olegshapovalov" target="_blank">@olegshapovalov</a>, <a href="https://www.kaggle.com/lenny27" target="_blank">@lenny27</a>, <a href="https://www.kaggle.com/ilyadobrynin" target="_blank">@ilyadobrynin</a> for this competition, that was a great three months!</p>
<p>Yep, pseudo labeling is important here. I had two iterations: from a submission around 840 I took the top 60% of predictions and got around 830. For the second iteration, after the team merge, from a submission around 860 I took the top 70% of predictions (around 15k images).</p>
<p>Kind of, yes, but not 26; we started from 2 thresholds, one for dolphins and one for whales, which improved the score (872->879), then tried to tune thresholds for the biggest species (beluga, blue whale, etc). It is kind of LB overfitting, but here it worked.</p>
<p>Oh, my mistake, these thresholds are used to decide new individual or not: if similarity > threshold, predict the top-1 id from train; otherwise put new_individual in the top-1 place.</p>
<p>Yes, I've tried from 512 to 8192, 4096 has better result than others, the lower the worse</p>
<p>What exactly are you interested in? I created a classification model for species prediction and predicted the test data species; we used it for the species thresholds. You can see the training notebook here: <a href="https://www.kaggle.com/code/kwentar/species-classification" target="_blank">https://www.kaggle.com/code/kwentar/species-classification</a>. It is straightforward image classification, nothing special.</p>
<p>I've lost the CV results for this part, but it was quite significant. BTW, my teammates used different embedding sizes; I guess there is no single best choice here.</p> |
Happywhale - Whale and Dolphin Identification | DOLG Army: 7th Place Solution Summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi all, </p>
<p>Congratulations to all the winners. And thank you to the Kaggle hosts and all participants for this exciting competition. As always, the last week was very tough, with huge LB changes for everybody; fortunately for us, our ideas worked and we ended up on the right side of the private LB.</p>
<p>Congratulations to <a href="https://www.kaggle.com/ks2019" target="_blank">@ks2019</a> for becoming Kaggle Competitions GM. Thanks a lot to my teammates <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> <a href="https://www.kaggle.com/tanulsingh077" target="_blank">@tanulsingh077</a> <a href="https://www.kaggle.com/navjotbansal" target="_blank">@navjotbansal</a>. Such a great team effort 🤜.</p>
<p>Our solution consists of the following major components:</p>
<ol>
<li>Data Recipe</li>
<li>Modelling</li>
<li>Progressive Psuedo Labelling</li>
<li>PostProcessing</li>
<li>Ensemble</li>
</ol>
<p>All our models are trained on TPUs using more or less the same tensorflow pipeline that KS shared in the beginning of the competition.</p>
<h2>Data Recipe</h2>
<p>There were different datasets available publicly, which we thought could help us with diversity. Inspired by <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a>'s solution in Chaii, we decided to make our own data recipe. We used the following combinations of datasets to train our models:</p>
<ul>
<li>Fullbody Annotations</li>
<li>Fullbody Annotations + Backfins concatenated horizontally </li>
<li>Fullbody Annotations + Original Images concatenated horizontally</li>
</ul>
<p>Fullbody Annotations<br>
<a href="https://ibb.co/LPtzkGQ"><img src="https://i.ibb.co/KGqj01N/fullbody.png" alt="fullbody"></a><br><a target="_blank" href="https://imgbb.com/"></a><br></p>
<p>Fullbody Annotations + Backfins concatenated horizontally <br>
<a href="https://ibb.co/kH7CW6G"><img src="https://i.ibb.co/r7jr1fQ/fullbody-backfin.png" alt="fullbody-backfin"></a><br><a target="_blank" href="https://imgbb.com/"></a><br></p>
<p>Fullbody Annotations + Original Images concatenated horizontally<br>
<a href="https://ibb.co/mz25mhN"><img src="https://i.ibb.co/dKV4Skm/fullbody-original.png" alt="fullbody-original"></a><br><a target="_blank" href="https://imgbb.com/"></a><br></p>
<p>We used simple augmentations:</p>
<ul>
<li>horizontal flip</li>
<li>random pixel based augmentation (brightness, contrast, HSV)</li>
<li>cutout</li>
</ul>
<h2>Modelling</h2>
<p>We used a combination of DOLG (with EFFNet backbone) and normal EFFNets with CurricularFace loss. We used <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a> 's implementation of DOLG model, <a href="https://www.kaggle.com/competitions/landmark-retrieval-2021/discussion/277099" target="_blank">implemented in PyTorch</a> which we ported to TensorFlow because it worked better than the public implementation in this competition. We also used multiple heads in all the models: one for species classification and another for individual classification. Species classification head was trained with normal softmax loss while the individual classification head was trained with CurricularFace loss.</p>
<p>Models in our final submission:</p>
<ul>
<li>DOLG B5/B6/B7, image_sizes: (786, 786x2), (896, 896x2), dataset: Fullbody Annotations + Backfins</li>
<li>DOLG B6/B7, image_sizes: (896, 896x2), dataset: Fullbody Annotations + Original Images</li>
<li>DOLG B5/B6/B7, image_sizes: (1024, 1024), dataset: Fullbody Annotations</li>
<li>EFFNet B5/B6/B7, image_sizes: (786, 786x2), (896, 896x2), dataset: Fullbody Annotations + Backfins</li>
<li>EFFNet B5/B6/B7, image_sizes: (1024, 1024), dataset: Fullbody Annotations</li>
</ul>
<p>All the models were trained on psuedo labelled data from our best ensemble. During inference we also use hflip as TTA.</p>
<h2>PostProcessing</h2>
<p>We used a second stage model for getting better confidence scores which is less susceptible to threshold changes. The idea was to use predictions/embeddings/features to get a better confidence score than just using nearest distances. After candidate generation (top100 neighbors based on knn distances) we engineered the following features for our 2nd stage model (features were generated from the models stated above):</p>
<ul>
<li>species probabilites for each image_id</li>
<li>top3 nearest distances for each (image_id, unique individual_id) pair present in the candidates</li>
<li>distance of each image_id from centroid of each unique individual_id present in the candidates</li>
<li>rank of each unique individual_id present in the candidates</li>
<li>sum of top3 neighbor distances </li>
<li>OOF predictions</li>
</ul>
<p>We trained 5-fold XGB and LightGBM models on the above features and used an ensemble of their predictions as the final confidence scores.</p>
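<p>A rough sketch of turning the top-100 neighbours into such a candidate table (pandas; column names are illustrative, not the team's exact feature set):</p>
<pre><code># Hypothetical sketch: one row per (image_id, candidate individual_id) with distance features
import pandas as pd

def build_candidate_features(neigh_df):
    """neigh_df columns: image_id, individual_id, distance, rank (one row per neighbour)."""
    grp = neigh_df.groupby(["image_id", "individual_id"])
    feats = grp.agg(
        top1_dist=("distance", "min"),
        top3_mean_dist=("distance", lambda d: d.nsmallest(3).mean()),
        top3_sum_dist=("distance", lambda d: d.nsmallest(3).sum()),
        best_rank=("rank", "min"),
        n_hits=("distance", "size"),
    ).reset_index()
    return feats    # later joined with species probabilities, centroid distances, OOF preds
</code></pre>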
<h2>Ensemble</h2>
<p>We used simple weighted voting approach using the confidence scores obtained from above where the weights were optimized using 5 fold OOFs.</p>
<h2>Acknowledgements</h2>
<ul>
<li>Thanks a lot to <a href="https://www.kaggle.com/jpbremer" target="_blank">@jpbremer</a> for sharing the datasets (The real hero 🔥)</li>
<li>Special mention to Google TPU Research Program (<a href="https://sites.research.google/trc/" target="_blank">https://sites.research.google/trc/</a>) which helped us in training bigger models in the last few days of the competition by supporting us with V3 TPUs on GCP.</li>
</ul> |
Happywhale - Whale and Dolphin Identification | Hierarchical learning solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to say congratulations to all the winners!🎉🔥 You did a good job with such a complex dataset!</p>
<p>Although I didn’t do well in this competition (4 places to bronze😩), I would like to share my solution. I think this information may be useful for further competitions.</p>
<p><strong>Architecture</strong><br>
At the beginning of the competition I was interested in finding neural net architectures that outperform the baseline effnet. I was sure there were better architectures (well, in the end I found that the differences are not so drastic). I settled on effnetv2_m/effnet_b6/convneXt_m in my final ensemble.<br>
When I saw the data, the first idea that came to my mind was that this is a hierarchical learning competition. After some searching I found this interesting article: <strong><a href="https://arxiv.org/pdf/2104.13643.pdf" target="_blank">Your "Flamingo" is My "Bird": Fine-Grained, or Not</a></strong>. The article addresses a classification problem, but I expected it to work well for re-identification too (I modified the article's approach a little, making the individual embedding part larger than the species part). You can see the final architecture of my solution in the picture below.<br>
<a href="https://postimg.cc/4Y5FhwRD" target="_blank"><img src="https://i.postimg.cc/bvcXWFCw/arch1.png" alt="arch1.png"></a></p>
<p><strong>Augmentations</strong><br>
2.1 <strong>Shear scale rotate</strong><br>
From the dataset you can see that there are a lot of different viewpoints for one whale. Using this augmentation I tried to simulate them.<br>
2.2 <strong>CLAHE</strong> at test and train time before all augmentation<br>
The purpose of using this augmentation is that some back fins are under water, some are darkened. <br>
2.3 <strong>Local grayscale transformation</strong><br>
This transformation is based on this <a href="https://arxiv.org/pdf/2101.08533.pdf" target="_blank">article</a>, which gave a little boost to LB<br>
2.4 <strong>Color/brightness/hue/horizontal flip</strong></p>
<p><strong>Dataset</strong><br>
For all novice kagglers I would like to say that <strong>dataset is crucially important</strong>. I didn’t pay a lot of attention to it, but when I switched from detic to backfins dataset, the score was boosted a lot. </p>
<p><strong>Loss</strong><br>
A lot of people used ArcFace over CosFace, but in my experiments CosFace performed better. I used <a href="https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123560715.pdf" target="_blank">subcenter CosFace</a>, because of the noise in the dataset. Also I followed dynamic margins as in the top solutions of Google Landmark competition</p>
<p><strong>Things that didn’t work 🙁</strong><br>
Experiments with triplet loss. I tried several approaches:</p>
<ol>
<li>Simple hard triplet loss</li>
<li>Triplet mining in the following way: each batch contained the same proportion of whales and dolphins.</li>
<li><a href="https://arxiv.org/pdf/2104.13643.pdf" target="_blank"> Centroid triplet loss </a><br>
All of these experiments performed worse than CosFace</li>
</ol>
<p>👀<strong>Summary of mentioned articles:</strong><br>
[1] Your "Flamingo" is My "Bird": Fine-Grained, or Not <a href="https://arxiv.org/pdf/2104.13643.pdf" target="_blank">https://arxiv.org/pdf/2104.13643.pdf</a><br>
[2] Sub-center ArcFace: Boosting Face Recognition by Large-scale Noisy Web Faces <a href="https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123560715.pdf" target="_blank">https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123560715.pdf</a><br>
[3] A general multi-modal data learning method for Person Re-identification<br>
<a href="https://arxiv.org/pdf/2101.08533.pdf" target="_blank">https://arxiv.org/pdf/2101.08533.pdf</a><br>
[4] On the Unreasonable Effectiveness of Centroids in Image Retrieval<br>
<a href="https://arxiv.org/pdf/2104.13643.pdf" target="_blank">https://arxiv.org/pdf/2104.13643.pdf</a></p> |
Happywhale - Whale and Dolphin Identification | My solution - 29th rank [part] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Rank: 34 </p>
<p><strong>Image Size</strong></p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Train Image size</th>
<th>Infer Image size</th>
</tr>
</thead>
<tbody>
<tr>
<td>B5</td>
<td>960</td>
<td>1056</td>
</tr>
<tr>
<td>B6</td>
<td>768</td>
<td>840</td>
</tr>
<tr>
<td>B7</td>
<td>600</td>
<td>660</td>
</tr>
</tbody>
</table>
<p><strong>Augmentations:</strong></p>
<pre><code>import tensorflow as tf

def augment(image):
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_hue(image, 0.1)
    image = tf.image.random_saturation(image, 0.80, 1.20)
    image = tf.image.random_contrast(image, 0.80, 1.20)
    image = tf.image.random_brightness(image, 0.10)
    return image
</code></pre>
<p><strong>Architecture</strong></p>
<p>Encoder {B5/B6/B7} -> GAP {Global Avg pooling} -> Batchnorm -> Multi-SampleDropout </p>
<p>The output of Multi-SampleDropout is fed into two Arcface Classification heads <br>
1. Individual Classification head,<br>
2. Species Classification head</p>
<pre><code>individual_margin = head(n_classes = config.N_CLASSES, s = 30, m = 0.3, name=f'head_individual/{config.head}', dtype='float32')
species_margin = head(n_classes = config.N_SPECIES, s = 30, m = 0.3, name=f'head_species/{config.head}', dtype='float32')
</code></pre>
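<p>And a small Keras sketch of the Multi-SampleDropout block sitting between the pooled features and those heads (the projection size, dropout rate and number of samples are illustrative assumptions, not the team's settings):</p>
<pre><code># Hypothetical sketch: average one shared dense projection over several dropout masks
import tensorflow as tf

def multi_sample_dropout(x, units=512, rate=0.3, n_samples=4):
    dense = tf.keras.layers.Dense(units)      # shared weights across all dropout samples
    outs = [dense(tf.keras.layers.Dropout(rate)(x)) for _ in range(n_samples)]
    return tf.keras.layers.Average()(outs)
</code></pre>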
<p><strong>Training Settings</strong></p>
<ol>
<li>40 epochs for first stage</li>
<li>10 epochs with pseudo labels<br>
<code>Use multiple models, find top-1 examples that the majority agree on, ensuring 0.2 separation between confidence of 1st class and 2nd class. Generated 12157 pseudo labels</code></li>
<li>Embedding dimension: 4096</li>
<li>Freeze Batchnorm</li>
<li>Use 4 different datasets<br>
a. Full body<br>
b. Tracer + Full body {Use tracer for most cases. if tracer bbox not present, use fullbody bbox}<br>
c. backfin {all the bbox}<br>
d. backfin {bbox after cleanup as suggested by the <a href="https://www.kaggle.com/bre" target="_blank">@bre</a>}</li>
<li>Lookahead + Adam</li>
</ol>
<p><strong>Thanks</strong><br>
Thanks to datasets contributors<br>
<a href="https://www.kaggle.com/adnanpen" target="_blank">@adnanpen</a> <br>
<a href="https://www.kaggle.com/jpbremer" target="_blank">@jpbremer</a> </p>
<p>Teammates<br>
<a href="https://www.kaggle.com/lextoumbourou" target="_blank">@lextoumbourou</a> <br>
<a href="https://www.kaggle.com/zekunn" target="_blank">@zekunn</a> <br>
<a href="https://www.kaggle.com/poorneshwaran" target="_blank">@poorneshwaran</a> </p>
<p>We were able to get a good ranking due to good support and hard work from my teammates. Thanks a lot. Let's catch up on some other challenges and do better than this.</p>
<p><strong>Notes</strong><br>
Check out the notebook shared by <a href="https://www.kaggle.com/lextoumbourou" target="_blank">@lextoumbourou</a><br>
<a href="https://www.kaggle.com/code/lextoumbourou/happywhale-tpu-baseline-to-0-804-elasticface/notebook" target="_blank">https://www.kaggle.com/code/lextoumbourou/happywhale-tpu-baseline-to-0-804-elasticface/notebook</a></p>
<p><a href="https://www.kaggle.com/leonshangguan" target="_blank">@leonshangguan</a> </p>
<p>You can find the augmentations that I used. Changing hue was one of the main boosters. <br>
Introduced the Batch norm and Multi-sample drop out before the classification heads.</p>
<p>Colab Pro. I reduced batch size a lot</p> |
Happywhale - Whale and Dolphin Identification | Solo silver - minimum resources solution with heavy postprocessing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Happywhale - Whale and Dolphin Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It was an amazing but tough competition especially for those who went solo. <br>
HW : Kaggle TPU for models' weights, Kaggle GPU for embeddings and nearest neighbours calculations, home mostly CPU for final postprocessing<br>
Used public datasets: backfintfrecords by <a href="https://www.kaggle.com/jpbremer" target="_blank">@jpbremer</a>, happywhale-tfrecords-fullbody-768 and happywhale-tfrecords-backfin-768 by <a href="https://www.kaggle.com/ragner123" target="_blank">@ragner123</a> - big thanks to both the authors, before all to <a href="https://www.kaggle.com/jpbremer" target="_blank">@jpbremer</a> for all the annotations!<br>
Private dataset for species classification.<br>
Models : effnetv1_b7 with ArcFace loss, image resolution: 768, batch size: 64 and 80, basic little bit tuned augmentation.</p>
<p><strong>Postprocessing</strong><br>
Training models and calculating embeddings meant for me 5 weeks of suffering with tensorflow. This fact together with different folds' distributions and species/individual_id encodings for used datasets led me to the decision to use only <strong>nearest neighbours dataframes</strong> calculated for each single model (fold), not concatenating directly the embeddings.</p>
<p>I divided the test images into 4 categories based on the species predictions and the species' body characteristics (above all, fins). For each category I took a different set of NN DFs, limited them to images from that particular category, concatenated them, grouped by [image, target] pairs, and calculated the mean and count of the present predictions. I kept only pairs with a high enough presence count (e.g. 6 of 7), set the threshold for new_individual based on the category, and calculated the appropriate part of the final submission. In the end I concatenated the 4 parts of the submission together.</p>
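<p>A sketch of that per-category aggregation with pandas (column names are my assumptions: each per-model dataframe holds one row per nearest-neighbour hit with columns image, target, distance):</p>
<pre><code># Hypothetical sketch: merge the per-model NN dataframes for one category of test images
import pandas as pd

def aggregate_category(nn_dfs, category_images, min_count=6):
    dfs = [df[df["image"].isin(category_images)] for df in nn_dfs]
    all_nn = pd.concat(dfs, ignore_index=True)
    agg = (all_nn.groupby(["image", "target"])["distance"]
                 .agg(mean_dist="mean", n_models="count")
                 .reset_index())
    agg = agg[agg["n_models"] >= min_count]        # e.g. present in 6 of 7 models
    return agg.sort_values(["image", "mean_dist"]) # new_individual handled per-category later
</code></pre>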
<p>What I tried that turned out to be only lost time: training within groups of species (e.g. no-fin species separately) didn't work for me.<br>
What could have been done better:<br>
Nearly everything about the models - tuning parameters, trying other backbones …<br>
What I regret most is that I didn't have time to realize my original idea of creating a bigger training set with augmented images for the individual_ids with fewer than 3 images. This could have been the key to better results.</p>
<p>My big thanks and deepest respect go to the host organization Happywhale and to all their contributors for preparing such a great dataset and for spreading awareness about the world of these magnificent creatures.<br>
Congrats and thanks to all those who created and presented their great solutions, the host will have really something to choose from.</p> |
Helping Santa's Helpers | How I got my run time down from 10 hours to 10 minutes | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Helping Santa's Helpers
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Earlier in this forum there was a post about the run time needed to process the 10 million toys. When I read it I was shocked to see people talking about run times measured in minutes, because at the time my python implementation was taking 10-12 hours to run. However, since I stumbled on how to reduce the run time significantly, I wanted to share it for anyone else interested.</p>
<p>At the time I was exploring 2 different solutions, one algorithmic and one evolved, and I was trying to get the evolved solution to run a lot faster. Both solutions involved reading in all 10 million toys, or as many as my computer's RAM could hold without breaking. I played around with various code changes, such as replacing the sanctioned/unsanctioned hours breakdown, which counted the minutes one at a time, with a direct calculation, and using faster ways of slicing my lists, but my runs still took hours.</p>
<p>The real thing that sped up my runs was changing my data structure. Instead of reading in all 10 million toys and processing them individually, I made a histogram of the toys: there were ~100,000 one-minute toys, ~90,000 two-minute toys, etc. So I could store the value 100,000 at list index [1] and 90,000 at list index [2] and process that. Suddenly, instead of 10 million unique values, my computer was dealing with around 25 thousand distinct values, which only took a couple of minutes, plus several more minutes to go back in and substitute actual toys for their durations.</p>
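<p>As a rough illustration of the idea (not the original code; the CSV layout and the "Duration" column name are assumptions about the competition file):</p>
<pre><code>from collections import Counter
import csv

def build_duration_histogram(toys_csv):
    """Collapse ~10M toys into counts per distinct duration (in minutes)."""
    counts = Counter()
    with open(toys_csv) as f:
        reader = csv.DictReader(f)
        for row in reader:
            counts[int(row["Duration"])] += 1
    return counts  # e.g. {1: ~100000, 2: ~90000, ...}

# The scheduler then works on ~25k (duration, count) pairs instead of
# 10M individual toys, and actual ToyIds are substituted back at the end.
</code></pre>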
<p>This didn't take into account relative start times, so I just started them all at year 2015, giving up a couple of months in exchange for easier coding</p>
<p>I should caveat to say that I did eventually abandon this solution in favor of my teammate's implementation which took longer to run, but for other reasons gave a better solution and was better structured to roll in improvements.</p>
<p>For others who had shorter runtimes, did you implement a similar solution? Or was there another way to speed up the code ?</p> |
Helping Santa's Helpers | Solution of team "Master Exploder" | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Helping Santa's Helpers
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>As <a href="http://www.kaggle.com/c/helping-santas-helpers/forums/t/11472/our-approach">promised</a>, with some delay though, we provide full description and sources for the solution of our team "Master Exploder" (with Marcin Mucha).</p> |
Herbarium 2020 - FGVC7 | 10th place: instant collaboration | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Herbarium 2020 - FGVC7
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, I've missed it.
Added link to the text of my post</p>
<p>Oh, yeah.
And here's my code
<a href="https://github.com/zetyquickly/herbarium-fgvc7-2020/blob/master/DESCRIPTION.md">https://github.com/zetyquickly/herbarium-fgvc7-2020/blob/master/DESCRIPTION.md</a></p>
<p>Hi, community! </p>
<p>First of all I want to say thanks to organizers and participants for such interesting competition.</p>
<p>Here I'll present the details of our model. It is a weighted ensemble of the predictions of three models. The first two are introduced by my teammate, the third one by me.</p>
<p>> I am Len (Leheng) Li; I introduce the network part and my teammate Emil introduces the model ensemble.
Our main work is based on <a href="https://arxiv.org/pdf/1912.02413.pdf">BBN</a> (CVPR2020 oral, <a href="https://github.com/Megvii-Nanjing/BBN">code</a>); its main contribution is dynamically paying attention to the tail classes. We then replaced resnet50 with resnest101 and EfficientNet b3.
<img src="https://i.ibb.co/nfYTxVn/1.png" width="800">
For resnest101, we first center-crop a 650*650 image and resize it to 256*256. It got 0.61094.
For EfficientNet b3, we first center-crop a 650*650 image and resize it to 300*300. It got 0.60282.
I am using 4*2080ti and 1*v100 16g, which is a drop in the bucket for this challenge. I only trained my model for 20 epochs. I think the performance would be higher if I trained for longer.</p>
<p>> At last, many thanks to my teammate Emil; it's the first time I have collaborated with someone from abroad (China-Russia). I really enjoyed the experience with Emil. And thanks to the organizers for hosting this challenge.</p>
<p>And my part:</p>
<p>> ## 1. Model description
I've tried to squeeze as much as possible out of the se_resnext50_32x4d feature generator. The main source of ideas was this "A Bag of Freebies" <a href="https://youtu.be/iRrDwzcxg08">video</a>. There were 32k classes, hierarchically arranged into superclasses with a smaller number of distinct labels. In my model I've pursued them all, with different weights applied before the loss. The loss was a weighted FocalLoss from this paper. I haven't analyzed in detail how much that improved performance, but it seems obvious that one should try to weight such unbalanced data. As the info says:</p>
<p>> Each category has at least 1 instance in both the training and test datasets. Note that the test set distribution is slightly different from the training set distribution. The training set contains species with hundreds of examples, but the test set has the number of examples per species capped at a maximum of 10.</p>
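<p>For reference, a class-weighted focal loss of the kind mentioned above can be written as follows (a generic sketch, not the author's exact implementation; the per-class weights are assumed to come from something like inverse class frequencies):</p>
<pre><code>import torch
import torch.nn.functional as F

class WeightedFocalLoss(torch.nn.Module):
    def __init__(self, class_weights, gamma=2.0):
        super().__init__()
        # class_weights: tensor [num_classes], e.g. inverse class frequencies
        self.register_buffer("class_weights", class_weights)
        self.gamma = gamma

    def forward(self, logits, targets):
        log_p = F.log_softmax(logits, dim=-1)
        log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
        pt = log_pt.exp()
        w = self.class_weights[targets]
        # down-weight easy examples, up-weight rare classes
        loss = -w * (1.0 - pt) ** self.gamma * log_pt
        return loss.mean()
</code></pre>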
<p>> I've collected statistics of the mean and variance of the whole dataset to normalize input images, and that gave a little boost. The next improvement I introduced was a decaying learning rate: it starts from 30e-5 and is divided by two every 5 or 10 steps. I used the Adam optimizer. There was an attempt to switch optimizers during training from Adam to SGD and vice versa; I called this paradigm the AdamEva optimizer. The attempt crashed against the constraints of the pytorch-lightning framework, which allows you to use several optimizers but gives each optimizer its own train call.</p>
<p>> Finally, about augmentations and TTA. Random crop and random flip of the herbarium image improve performance, but Color Jitter does not - that's my insight. For the TTA I sampled 4 inputs: 3 times with the train augmentation and 1 more time with just normalization and resize. After that, the logits of these four samples were averaged to produce the output logits.</p>
<p>> ## 2. What also hasn't worked
> * Metric Learning
I haven't figured out how to do metric learning for such a big dataset with such an amount of unique labels. I've engineered how to store this amount of data but haven't engineered a way to use it.</p>
<p>> * Discriminator Network
The main idea is to train a network to predict whether an image is from the train or the test set. This might help to create a robust validation set as a part of the train set. A good validation set gives you the ability to know how good your machine learning model is. Sounds obvious, but it isn't easy to find one when your dataset is very imbalanced. By the way, I used a random subset of the train set as the val set, and that isn't right. In my opinion the problem of choosing a validation set is the one to solve first in the future!</p>
<p>> ## 3. What went into production (best submissions)
I mixed our models with two approaches: averaging of probabilities and averaging of logits. The first didn't work well, while the second showed better results than either of our networks.</p>
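<p>The difference between the two mixing schemes is only where the averaging happens relative to the softmax; a toy sketch (the weights and names are illustrative, not the author's code):</p>
<pre><code>import torch

def ensemble_probs(logits_list, weights):
    # average after the softmax
    probs = [w * torch.softmax(l, dim=-1) for l, w in zip(logits_list, weights)]
    return torch.stack(probs).sum(0)

def ensemble_logits(logits_list, weights):
    # average before the softmax (what worked better here)
    mixed = torch.stack([w * l for l, w in zip(logits_list, weights)]).sum(0)
    return torch.softmax(mixed, dim=-1)
</code></pre>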
Herbarium 2020 - FGVC7 | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Herbarium 2020 - FGVC7
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hey <a href="/mrt23564">@mrt23564</a>,
I used the models' logits as input to the stacking net. The input shape will be [batch size, 1, num models, 32093]. In my case, num models = 5.
I tried setting the num_channels to 32 and 64 first and found out that the stacking net couldn't overfit the mini-train set probably due to it having too few parameters. Increasing to 128, 256 or something larger solved this issue. In the end, I went with 128.</p>
<p>Firstly, I would like to thank the host team and Kaggle for hosting such an interesting competition as well as my teammates (<a href="/dattran2346">@dattran2346</a>, <a href="/moewie94">@moewie94</a>) for their hard work and contributions.</p>
<p><strong>Model Architecture</strong></p>
<p>We used EfficientNet B3/4/5 (noisy student version) with original image sizes (320x320, 380x380, 456x456), Inception V4 and SEResNeXt50 with image size 448x448 as our backbones.
For the EfficientNets, we first extracted feature maps from blocks 4, 5, and 6, forwarded them through one-squeeze multi-excitation modules (<a href="https://eccv2018.org/openaccess/content_ECCV_2018/papers/Ming_Sun_Multi-Attention_Multi-Class_Constraint_ECCV_2018_paper.pdf">OSME</a>), then pooled and concatenated them into a "local" features vector. The ArcFace(s=16, m=0.1) loss + BNNeck combination was then used with "head" block features as input to help our models learn a better "global" representation.
This pipeline was trained jointly end-to-end with Cross Entropy losses (no label smoothing) and <a href="https://papers.nips.cc/paper/7344-maximum-entropy-fine-grained-classification.pdf">entropy losses</a>; at inference time, we averaged the 3 classifiers' logits (local classifier, ArcFace and BNNeck).</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1134433%2Fb488481298423a5b4b3ecdbf549b1ad2%2Fmodel.png?generation=1590638315354457&alt=media" alt=""></p>
<p><strong>Training</strong></p>
<p>We employed a two-stage training scheme similar to the 4th ranked team's. In the first stage, only samples from the top 5000 most frequent categories were used. After obtaining a good model (measured in terms of validation f1-score), we employed the weight imprinting trick to initialize the classifiers' weights for the remaining 27093 categories, then re-trained the entire network until convergence. Below are our training configs.
+ Stage 1: 75 epochs, batch size = 2048 (gradient accumulation), cosine scheduler with linear warmup and AdamW optimizer, balanced sampling (randomly sample 75 images per category per epoch).
+ Stage 2: Increase number of epochs to 105, balanced sampling (randomly sample 25 images per category per epoch).
+ Augmentations: <code>ResizeShorterSide(random.randint(img_size + 32, img_size + 128)), RandomCrop(img_size), RandAugment(num_ops=2, magnitude=random.randint(1, 11)), RandomErasing()</code></p>
<p><strong>Scores</strong>
We reported some single model scores without any post-processing here
+ Inception V4 - Public 0.79712 - Private 0.77781
+ B3 - Public 0.80570 - Private 0.78182
+ B4 - Public 0.83014 - Private 0.81488
+ B5 - Public 0.84774 - Private 0.83309</p>
<p><strong>Ensemble</strong>
We took the code and idea in this <a href="https://www.kaggle.com/mathormad/5th-place-solution-stacking-pipeline">notebook</a> from last year's RSNA challenge and modified them to build a simple 2-layer stacking network to combine predictions from our single models. The stacking ensemble obtained public score 0.85505 and private score 0.84180.</p>
<pre><code>KERNEL_SIZE = 9
NUM_CHANNELS = 128

class StackingCNN(nn.Module):
    def __init__(self, num_models, num_channels=NUM_CHANNELS):
        super(StackingCNN, self).__init__()
        self.base = nn.Sequential(OrderedDict([
            ('conv1', nn.Conv2d(1, num_channels,
                                kernel_size=(num_models, 1),
                                padding=(int((num_models - 1) / 2), 0))),
            ('relu1', nn.ReLU(inplace=True)),
            ('conv2', nn.Conv2d(num_channels, num_channels,
                                kernel_size=(1, KERNEL_SIZE),
                                padding=(0, int((KERNEL_SIZE - 1) / 2)))),
            ('relu2', nn.ReLU(inplace=True))
        ]))

    def forward(self, images, labels=None):
        with autocast():
            x = self.base(images)
            x = x.permute(0, 3, 1, 2)
            x = fast_global_avg_pool_2d(x)
            if self.training:
                losses = {
                    "avg_loss": F.cross_entropy(x, labels)
                }
                return losses
            else:
                outputs = {"logits": x}
                return outputs
</code></pre>
<p><strong>Postprocessing</strong>
We exploited the fact that the test set has the number of examples per species capped at a maximum of 10. For each species, we first sorted the predictions based on their top-1 confidence scores. For predictions falling outside the top 10 of their species, we changed them to the top-2 category instead. This simple trick gave a pretty consistent boost (0.2 - 0.3%) on the public LB for every single submission file. Our final submission scored 0.85797 on public and 0.84484 on private.</p>
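<p>A sketch of this cap-at-10 reassignment (not the team's exact script; the <code>top1</code>, <code>top2</code> and <code>conf</code> column names are assumptions):</p>
<pre><code>import pandas as pd

def cap_species_at_10(preds: pd.DataFrame) -> pd.Series:
    """preds has columns: image, top1, top2, conf (top-1 confidence).
    Keep at most the 10 most confident images per top-1 species;
    the rest fall back to their top-2 species."""
    preds = preds.sort_values("conf", ascending=False).copy()
    rank_within_species = preds.groupby("top1").cumcount()
    return preds["top1"].where(rank_within_species < 10, preds["top2"])
</code></pre>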
<p>We tried all tricks with a ResNet50 to get quick feedback. Our simplest baseline was just a ResNet50 with an ArcFace head. Adding other things on top of ArcFace doesn't improve the performance of the ArcFace head per se; however, we now have extra classifiers' logits to ensemble 😄. I would say that our single model isn't "single" in the purest sense 😄.
In stage 1, we have 75x5000 = 375000 samples per epoch. In stage 2, we have 25 x 32093 = 802k samples per epoch. We train all models for a total of 75+105=180 epochs.
We have a box of 4 2080ti for prototyping + a dgx1 for longer training.</p>
<p>thanks for your words 👍. Your team's TResNet was a very exciting architecture !</p>
<p>I definitely agreed that the stacking cnn was our extra edge to win in this comp.
I constructed a mini-train set (about 80k images) with a similar distribution to the test set (all 32k categories, no more than 10 samples per cat.) to train the stacking cnn. Validation set was the same one as what we used to checkpoint our single models.</p> |
Herbarium 2020 - FGVC7 | 2nd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Herbarium 2020 - FGVC7
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, only the model used to produce the soft-targets is our current best model (Largest model) or ensemble of multiple models.
Then validated the next single models and sometimes smaller models against the soft-targets</p>
<p>Good question. At first we were using a stratified 80/20 split, but then, when we started using class-balanced loss weights, we used all of the training data since we wanted to include the few-shot classes (more than 10k classes have only 2-3 samples per class).
We didn't do k-fold splits since the dataset is relatively large and we didn't feel it was necessary for this task.</p>
<p><a href="/discoholic">@discoholic</a> before we ask our selves whether we "overfit" or not, first we should understand the metric we are trying to optimize.
"In macro F1 a separate F1 score is <strong>calculated for each species</strong> value and then <strong>averaged</strong>."</p>
<p>what do we know about the test set?
1. We know that each class is represented in the test set, so misclassifying images of the few-shot classes can gravely reduce our "macro F1" score on the test set.
2. We know that the test set is probably also imbalanced, since we already know that the number of samples per class is capped at 10 and there is at least one image per class: 138296 / 32094 < 10, so the train and test sets probably came from the same long-tail distribution.
3. The test set is ~8 times smaller than the training set</p>
<p>Given this information about the test set, if we "overfit" it's probably because the frequent classes in the training set are more "dominant" in the test set predictions than the few-shot classes.</p>
<p>If the score was top-1 metric instead of macro f1 score then I would agree that there is a higher risk of "over-fitting" if we train on all of the training set.</p>
<p>To answer your question, we used soft-target predictions of the test set from our best model (ensemble + TTA) and validated the score of each individual training run against them. Whenever we had a better top model (with ensemble), we updated the soft-target predictions of the test set, and from then on every run was validated against those (maybe it's not ideal, but it worked for us).
Our internal score and the public LB score had a good correlation, and we also saw a good correlation with the private LB score; our final submission was indeed our best private LB score.</p>
<p>First of all special thanks to my teammate Tal :)</p>
<h2>Backbone:</h2>
<p>As a backbone, we used TResNet architecture (High Performance GPU-Dedicated Architecture) which allowed us to do fast training with large batches, while maintaining top scores along all resolutions (more details in our <a href="https://arxiv.org/pdf/2003.13630.pdf">paper</a> and code <a href="https://github.com/mrT23/TResNet">here</a>).
Models used in this competition: TResNet-M, TResNet-L.</p>
<h2>BottleNeck Head:</h2>
<p>We avoided having a massive classification layer of 2048x32094 (65.72M params) by using a bottleneck head that reduces the embedding feature vector from 2048 dims to 512 dims. Total parameters = 2048x512 + 512x32094 (17.48M params).
This modification not only decreased the size of the network, but also allowed us to use larger batch sizes (faster training, and it helps the Soft-Triplet loss).
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1578028%2Fd861d950b9f015672631c7b59815e5a5%2FBasicBlcok_Tresnet%20(1)%20copy.png?generation=1590586933112476&alt=media" alt=""></p>
<h2>Loss function</h2>
<p>We used Cross-Entropy loss with label smoothing 0.2, and in addition we added a Soft-Triplet Loss with Soft Margin, which emphasizes hard examples by focusing on the most difficult positive and negative samples in the batch; for more details check out Section 3.1 of our ReID <a href="https://arxiv.org/pdf/1910.07038.pdf">paper</a>.
Online hard-negative mining works best when training with a large batch size (more negative examples); hence, we leveraged the high memory utilization of our network and the bottleneck head. For example, with TResNet-M @ 448x448 we could reach a maximum batch size of 128 on a single V100 16GB, significantly higher than other common networks like EfficientNet and ResNeXt.</p>
<h2>Class Balancing</h2>
<p>Herbarium 2020 dataset suffers from prominent class imbalance, therefore we used a simple loss weighting, multiplying by the inverse of class frequency.
Note: We used class balancing only in the last ~15 epochs of the training, to enable the network to learn the basic features first.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1578028%2F332b1711fa4eefa0b2de9984d491742d%2FScreen%20Shot%202020-05-27%20at%2016.46.38.png?generation=1590587199078597&alt=media" alt=""></p>
<h2>Augmentations</h2>
<p>• Squish instead of Crop
• FlipLR
• Cutout: 0.5</p>
<p>Special augmentation scheme (+~0.5% on public LB):</p>
<ul>
<li>Random Zoom-in (p=0.4, zoom_factor= random between 1-1.15)</li>
<li>Reduced affinity transformations: p_shear: 0.1, p_rotate=0.2, rotate_degrees=10</li>
<li>Decolorize: p_decolorize: 0.4</li>
<li>Rectangular resizing: small gains</li>
</ul>
<h3>TTA</h3>
<p>Averaging horizontal flip predictions</p>
<h2>Our best single model:</h2>
<p>Model: TResNet-L</p>
<p><strong>Stage 1:</strong>
100 epochs at 448x448, CE+SoftTriplet (class-balancing starting from epoch 85)
Public LB score (without post-processing): 0.84089
Public LB score (with post-processing P1+P2): 0.85093</p>
<p><strong>Stage 2 (high resolution fine-tuning):</strong>
10 epoch fine-tune + class balancing on 544x416
Public LB score (without post-processing): 0.84727
Public LB score (with post-processing P1+P2): 0.85604</p>
<h2>Post-Process (+~0.9% on the public LB)</h2>
<p>The organizers wrote in the description that:
"Each category has at least 1 instance in both the training and test datasets."</p>
<p>Based on this prior knowledge we used two post-processing algorithms that tries to fill some of the empty classes.</p>
<p><strong>P1: Switching classes based on low top-1 confidence (~0.9% on the public LB)</strong>
The idea is basically: whenever the network is not confident about its prediction, and one of the empty classes is close to the top1 prediction, switch the predictions:
1) Go through all the top 2 predictions in the test set and if all these criteria are met switch between top1 and top2:
• top2 > top1 * m (m=0.3)
• top2 class never appeared in the top1 predictions
• top1 class appeared at least once as top2 prediction for another sample image in the test set.
2) Repeat the same switching algorithm above for top 3, 4 and 5 (performed sequentially, first perform all of top2 switches and then top3 switches and so on).</p>
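<p>A schematic version of the P1 switching rule (a pseudocode-style sketch, assuming per-sample <code>topk_classes</code> / <code>topk_scores</code> arrays sorted by confidence; not the authors' script):</p>
<pre><code>import numpy as np

def p1_switch(topk_classes, topk_scores, m=0.3):
    """topk_classes / topk_scores: [num_test, k] arrays sorted by confidence."""
    top1 = topk_classes[:, 0].copy()
    for level in range(1, topk_classes.shape[1]):           # top2, then top3, ...
        classes_seen_as_top1 = set(top1.tolist())
        classes_seen_at_level = set(topk_classes[:, level].tolist())
        for i in range(len(top1)):
            cand = topk_classes[i, level]
            if (topk_scores[i, level] > topk_scores[i, 0] * m   # close enough to top1
                    and cand not in classes_seen_as_top1        # candidate class still "empty"
                    and top1[i] in classes_seen_at_level):      # old top1 covered elsewhere
                top1[i] = cand
                classes_seen_as_top1.add(cand)
    return top1
</code></pre>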
<p><strong>P2: Filling empty classes (~+0.1-0.2% on the public LB)</strong>
1) Go through the predictions after the first post-process, and for every class that doesn't appear in the top1 predictions, find the test image with the highest prediction for that class among all of the top5 predictions and assign it to that class.</p>
<h2>Pre-trained model</h2>
<p>We used pretrained weights from ImageNet and iNaturalist 2018. We got very similar scores, with iNat18 giving slightly better scores on the public LB. </p>
<h2>Ensemble (+0.8% on public LB)</h2>
<p>We used an ensemble of 9 models trained from different pretrains (imagenet and iNaturalist2018) and different resolutions, final resolutions: 544x416 mostly and some 620x476.
Public LB score (with post-processing P1+P2): 0.86310</p>
<h2>What didn't work for us (relevant to this competition only):</h2>
<ul>
<li>Fixed cropping by 10-20%</li>
<li>Pseudo-labeling</li>
<li>Mixup</li>
<li>Focal loss</li>
</ul> |
Herbarium 2020 - FGVC7 | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Herbarium 2020 - FGVC7
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sorry for the delay, I'll briefly describe my solution. </p>
<h1><strong>Methods</strong></h1>
<p>I used lip-r50 and seresnext101 as the backbones, and 448 * 448 resolution for training. To save resources, I used 352 * 352 resolution to train seresnext101 and finetuned it at 448 * 448 resolution in the last few epochs. For both backbones, I trained for 120 epochs with cosine decay, using label smoothing and mixup (balanced sampling). I chose the standard cross entropy loss as the loss function.
During testing, 576 * 576 resolution and ten-crop were used. Finally, I ensembled the two models above to get the final softmax score (seresnext101_softmax_score * 0.6 + lip-r50_softmax_score * 0.4). Just like other teams, rescoring was used to refine the predictions: because each category has no more than 10 samples in the test set, I sorted the test samples by confidence score, and for those samples whose top-1 category already had 10 samples, I changed them to the top-2 category instead and re-sorted by score. That gave the final score.</p>
<p>I also spent a lot time training some offline classifiers(such as knn and svm) using cnn features, but got little improvement.</p> |
Herbarium 2020 - FGVC7 | 4th method and code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Herbarium 2020 - FGVC7
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Just by randomly choosing 1k samples as the smallsepval.txt set and the others as bigseptrain.txt.</p>
<p>For your question:
* the final score is a single-model, single feed-forward pass result
* it needs about 2 hours per epoch and we do not train until the max epoch; we usually stop the run according to the evaluation score on our val set and the LB. </p>
<p>4 * 8 1080ti were used for training; it takes about 2 hours per epoch and the total training procedure takes about 12 epochs. </p>
<p>Hi, we would like to share our method and the code link is <a href="https://github.com/shijieliu/herbarium2020_4th">herbarium2020_4th</a></p>
<h1>Methods</h1>
<p>Our method mainly refers to the paper <a href="https://arxiv.org/pdf/1904.04232.pdf">A CLOSER LOOK AT FEW-SHOT CLASSIFICATION</a> and requires 2-stage training; we made a few changes to get the best result:
1. we use 448 * 448 for training and test
2. we use seresnext50 as our backbone, compared against senet154 and se_resnet152
3. we add a half crop in the image augmentation stage
4. we do not use cosine distance because cosine distance leads to slow convergence and low precision
5. we remove the No.23079 class in train and test</p>
Herbarium 2020 - FGVC7 | 5th - baseline with 11 epochs CE+SE-ResNext50 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Herbarium 2020 - FGVC7
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Find a smaller dataset and experiment; there are many things that could go wrong. And you certainly did something wrong, because I used the same arch and after one epoch I could score ~0.25.</p>
<p>FC params are much faster to compute. You can also add an FC 512 layer between 2048 and 32093 to reduce the number of params and hence increase the speed.</p>
<p>fp32 or fp16 should not make that much difference - it's got to be something else.</p>
<p>I had no dedicated validation set - I didn't make one because I did not have enough resources for more than one run :) </p>
<p>Congrats to all the winners. </p>
<p>I enjoyed last year's Herbarium 2019, where I managed to beat the Facebook AI team while using 100x less compute (relying on kernels only) to train the final solution, so I hoped I could do something similar this year. Oh boy, how wrong I was....</p>
<p>The size of the dataset knocked me out immediately. Still, I decided to run one experiment: to rerun the same code from last year and then play with post-processing :). </p>
<h2>Training the baseline</h2>
<p>I borrowed a 1080Ti for an extended weekend and ran 11 epochs :). I just used cross entropy, trained on the whole dataset, had only some small augmentations to enable fast learning, and did not treat the imbalance at all. Input size was 1/2 of the original; I used gradient accumulation to get to a batch size of 256 and scaled the LR with cosine annealing. You could think of my solution as a default or a baseline. </p>
<h2>Squeezing out the baseline - submit all you got</h2>
<h3>Comparing checkpoints</h3>
<p>I submitted after every epoch and tried to get a better understanding of what was going on.
The last two checkpoints scored 0.64+ on public. I noticed that as my score increased with later checkpoints, the number of predicted unique classes also increased roughly linearly: it went from epoch 5 (0.55, 18K unique classes) to epoch 11 (0.64, 24K).</p>
<h3>Postprocess 1</h3>
<p>I multiplied the softmax outputs by the inverse of the class frequency, got a boost of 0.05, and reached 0.69+ with 27K unique classes. </p>
<h3>Postprocess 2</h3>
<p>The classes in the test set had the same distribution as in the training set, with the count clipped at a max of 10. I calculated the difference between the expected counts and my predicted counts, then adjusted the probabilities accordingly. </p>
<p>For example, if I expected a count of 10 but got 15, I needed an adjustment factor of about 0.66. To prevent too much fluctuation I sqrt-normalized the adjustments and repeated the procedure several times. I tracked the number of uniquely predicted classes and stopped when it approached 32K. This gave me 0.04+, so I reached ~0.74. Horizontal flip gave me a bit extra, 0.005.</p>
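<p>Roughly, the count-matching adjustment described above can be sketched like this (illustrative only; the expected counts come from the training distribution clipped at 10, and the stopping criterion is the one described in the post):</p>
<pre><code>import numpy as np

def adjust_probs(probs, expected_counts, n_iter=5):
    """probs: [num_test, num_classes] softmax outputs.
    expected_counts: [num_classes] training counts clipped at 10."""
    probs = probs.copy()
    for _ in range(n_iter):
        predicted_counts = np.bincount(probs.argmax(1), minlength=probs.shape[1])
        ratio = expected_counts / np.maximum(predicted_counts, 1)
        adjustment = np.sqrt(ratio)          # sqrt-normalise to avoid big swings
        probs *= adjustment[None, :]
        if len(np.unique(probs.argmax(1))) >= 32000:   # ~all 32K classes covered
            break
    return probs
</code></pre>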
<p>Thanks,
I hope one day I get a chance to ride like the grown-up dudes :D</p>
<p>For me, adding an FC bottleneck (512) reduces GPU consumption by only 15% and epoch time by 10% (this could be due to a poor CPU).</p>
Herbarium 2021 - Half-Earth Challenge - FGVC8 | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Herbarium 2021 - Half-Earth Challenge - FGVC8
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to the organizers for preparing such a great competition.</p>
<p><strong>Difficult points of this competition:</strong></p>
<ol>
<li>The dataset for this competition was very large, and a lot of GPU computation was required, so I couldn't do much experimentation.</li>
<li>Consequently, I couldn't train the models for many epochs, so I can't say that the models fully converged. </li>
</ol>
<p><strong>Network</strong><br>
The network architecture was based on a configuration that had a good score in the previous year (2nd place). I added a few network layers to it.</p>
<p>input -> Backbone (feature extractor) -> GAP -> FC (2048x2048) -> BN -> Leaky ReLU -> FC2 (2048x512) -> BN -> Leaky ReLU -> FC3 (512x64500)</p>
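<p>That head, written out as a PyTorch module, looks roughly like this (a sketch built from the description above; the backbone output size and class count use the quoted numbers, the rest is an assumption):</p>
<pre><code>import torch.nn as nn

class HerbariumHead(nn.Module):
    def __init__(self, in_features=2048, embed_dim=512, num_classes=64500):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # GAP
        self.fc1 = nn.Sequential(nn.Linear(in_features, 2048),
                                 nn.BatchNorm1d(2048), nn.LeakyReLU())
        self.fc2 = nn.Sequential(nn.Linear(2048, embed_dim),
                                 nn.BatchNorm1d(embed_dim), nn.LeakyReLU())
        self.fc3 = nn.Linear(embed_dim, num_classes)

    def forward(self, feature_map):
        x = self.pool(feature_map).flatten(1)
        x = self.fc1(x)
        emb = self.fc2(x)       # the metric loss (SoftTriple) is applied here
        logits = self.fc3(emb)  # the classification losses are applied here
        return emb, logits
</code></pre>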
<p><strong>Loss</strong><br>
+Metric Loss</p>
<ul>
<li>SoftTriple Loss was applied to the output of FC2 (in my experiment, SoftTriple is better than ArcFace).</li>
</ul>
<p>+Classification Loss</p>
<ul>
<li>CrossEntropy was applied to the output of FC3 in the 1st stage (30 epochs).</li>
<li><a href="https://papers.nips.cc/paper/2020/file/2ba61cc3a8f44143e1f2f13b2b729ab3-Paper.pdf" target="_blank">CrossEntropy with Balanced SoftMax</a> was applied to the output of FC3 in the 2nd stage (5 epochs); a small sketch follows after this list.</li>
</ul>
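<p>The Balanced Softmax mentioned in the second item is just the usual cross entropy with the per-class log-counts added to the logits; a compact sketch (class counts are taken from the training set):</p>
<pre><code>import torch
import torch.nn.functional as F

class BalancedSoftmaxLoss(torch.nn.Module):
    def __init__(self, class_counts):
        super().__init__()
        # class_counts: tensor [num_classes] with training samples per class
        self.register_buffer("log_prior", torch.log(class_counts.float().clamp(min=1)))

    def forward(self, logits, targets):
        # shifting logits by log n_c re-balances the softmax towards rare classes
        return F.cross_entropy(logits + self.log_prior, targets)
</code></pre>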
<p><strong>Augmentation</strong><br>
+Train Phase</p>
<ul>
<li>img size : 448x448</li>
<li>AugMix without JSD Loss (custom implementation with Albu)</li>
<li>pipeline is below</li>
</ul>
<pre><code>A.Resize(random.randint(CFG.img_size[0] + 32 ,
CFG.img_size[0] + 128),
CFG.img_size[1]),
A.RandomCrop(CFG.img_size[0], CFG.img_size[0]),
A.HorizontalFlip(p=0.5),
RandomAugMix(severity=3, width=3, alpha=1.0, p=1.0),
A.Cutout(p=0.5),
ToTensorV2()
</code></pre>
<p>+Test Phase </p>
<ul>
<li>TTA 5 fold</li>
</ul>
<pre><code>A.Resize(random.randint(CFG.img_size[0] + 16,
CFG.img_size[0] + 64),
CFG.img_size[1]),
A.RandomCrop(CFG.img_size[0], CFG.img_size[0]),
A.HorizontalFlip(p=0.5),
A.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225],
max_pixel_value=255.0,
p=1.0)
</code></pre>
<p><strong>Optimizer and Other tricks</strong><br>
-AdamP : lr=0.001, no weight decay<br>
-CosineAnnealingLR<br>
-Exponential Moving Average<br>
-AMP (fp16)</p>
<p><strong>Training and Validation Strategy</strong></p>
<ul>
<li>I trained the model for a total of 35 epochs on the 1st stage(30) and the 2nd stage(5).</li>
<li>If there had been enough time and the models had been trained for more epochs, they would have performed better.</li>
<li>When I tried an 80:20 train-valid split, I saw no overfitting during training (over 40 epochs). Therefore, the final models were trained without a validation split.</li>
</ul>
<p><strong>Post-Process (switching)</strong><br>
I switched the 1st and 2nd most confident classes whenever one of the following 2 cases was satisfied.<br>
case 1:</p>
<ul>
<li>top_2_conf >= (top_1_conf * 0.7)</li>
<li>top_2_class not in top_1_conf list</li>
<li>top_1_class appears more than 2 times among the top_1 predictions</li>
</ul>
<p>case 2:</p>
<ul>
<li>top_2_conf >= (top_1_conf * 0.6)</li>
<li>top_2_class not in top_1_conf list</li>
<li>top_1_class appears more than 15 times among the top_1 predictions</li>
</ul>
<p><strong>Model Score</strong></p>
<ul>
<li><p>TResNet-M-448 (2 stage and TTA 5 fold) [ Public : 0.72937, Private : 0.68392 ] </p></li>
<li><p>TResNet-M-21k (2 stage and TTA 5 fold) [ sorry, no submission :( ] </p></li>
<li><p>TResNet-L-448 (2 stage and TTA 5 fold) [ Public : 0.75006, Private : 0.70231 ] </p></li>
<li><p>GENet-L (2 stage and TTA 5 fold) [ Public : 0.71862, Private : 0.67339 ] </p></li>
<li><p>ECA-NFNet-L0 (2 stage and TTA 5 fold) [ Public : 0.73398, Private : 0.68774 ] </p></li>
<li><p>Soft-Voting 5 model and Post-Process [ Public : 0.77625, Private : 0.73534 ] (Final Result)</p></li>
</ul>
<p>If you have any additional questions, please leave a comment. I hope my solution is helpful. :)</p>
<p>Hi Riccardo,</p>
<p>Thank you for your reply. Within today, I will fill out the Google Form!</p>
<p>Best,<br>
Haechan</p> |
Herbarium 2022 - FGVC9 | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Herbarium 2022 - FGVC9
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><ul>
<li><p>Optimization on single model:</p>
<table>
<thead>
<tr>
<th>change</th>
<th>public</th>
<th>private</th>
</tr>
</thead>
<tbody>
<tr>
<td>swin-B 224 baseline</td>
<td>0.77223</td>
<td>0.78442</td>
</tr>
<tr>
<td>add cross-entropy losses on genera,species, family</td>
<td>0.7829</td>
<td>0.79544</td>
</tr>
<tr>
<td>learning rate from 2e-4 to 5e-4</td>
<td>0.79444</td>
<td>0.80501</td>
</tr>
<tr>
<td>+5crop (resize 256, 288, 320, 384, 448 and center crop 224)</td>
<td>0.80088</td>
<td>0.80981</td>
</tr>
<tr>
<td>replacing cross-entropy by subcenter-arcface with <a href="https://www.kaggle.com/c/landmark-recognition-2020/discussion/187757" target="_blank">dynamic</a> margins</td>
<td>0.81532</td>
<td>0.82267</td>
</tr>
<tr>
<td>+ extra FC head trained with cross-entropy, added to the subcenter-arcface output after softmax</td>
<td>0.82201</td>
<td>0.82929</td>
</tr>
<tr>
<td>+ data augmentation the same with Swin-Transformer</td>
<td>0.8315</td>
<td>0.83554</td>
</tr>
<tr>
<td>from swin-B224 to SwinB384</td>
<td>0.8468</td>
<td>0.85245</td>
</tr>
<tr>
<td>+5crop (resize 400, 416, 448, 480, 512 and center crop 384)</td>
<td>0.84895</td>
<td>0.85654</td>
</tr>
<tr>
<td>+Number of frozen layers (parameters not updated) reduced from 100 to 0</td>
<td>0.85499</td>
<td>0.86055</td>
</tr>
<tr>
<td>+square resize and test crop from 384</td>
<td>0.85563</td>
<td>0.86201</td>
</tr>
<tr>
<td>+from swin base to swin V2</td>
<td>0.85851</td>
<td>0.86282</td>
</tr>
</tbody>
</table></li>
<li><p>Backbones:</p>
<table>
<thead>
<tr>
<th>id</th>
<th>backbone</th>
<th>public</th>
<th>private</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>swin-B</td>
<td>0.85499</td>
<td>0.86055</td>
</tr>
<tr>
<td>1</td>
<td>convnext-B</td>
<td>0.85464</td>
<td>0.85956</td>
</tr>
<tr>
<td>2</td>
<td>deit-iii B</td>
<td>0.84423</td>
<td>0.8495</td>
</tr>
<tr>
<td>3</td>
<td>resnest-101</td>
<td>0.83208</td>
<td>0.83838</td>
</tr>
<tr>
<td>4</td>
<td>efficient-B6</td>
<td>0.83841</td>
<td>0.84321</td>
</tr>
<tr>
<td>5</td>
<td>cswin-L</td>
<td>0.84373</td>
<td>0.8501</td>
</tr>
<tr>
<td>6</td>
<td>swin-L</td>
<td>0.85057</td>
<td>0.85607</td>
</tr>
<tr>
<td>7</td>
<td>swinv2-B</td>
<td>0.85851</td>
<td>0.86282</td>
</tr>
</tbody>
</table></li>
<li><p>Fusion:<br>
we utilize the scores on the public dataset as the fusion weights (a small sketch of this weighted averaging appears after this list). </p>
<table>
<thead>
<tr>
<th>fusion approach</th>
<th>public</th>
<th>private</th>
</tr>
</thead>
<tbody>
<tr>
<td>fusion models (0,1,2)</td>
<td>0.86366</td>
<td>0.8679</td>
</tr>
<tr>
<td>fusion models (0,1,2,3,4)</td>
<td>0.86619</td>
<td>0.87185</td>
</tr>
<tr>
<td>fusion models (0,1,2,3,4,5,6,7)</td>
<td>0.87193</td>
<td>0.87662</td>
</tr>
</tbody>
</table></li>
<li><p>Other settings:<br>
Batch size is always 512 (256 for big models), trained for 100 epochs.<br>
torch.amp is adopted.<br>
Following previous <a href="https://www.kaggle.com/competitions/herbarium-2021-fgvc8/discussion/242233" target="_blank">solutions</a>, we adopt a similar post-process and obtain a 0.001 improvement.<br>
We use Swin-B pretrained on ImageNet-22k for initialization; for other backbones, we try to use ImageNet-22k pretrained weights when possible.<br>
We freeze some layers (not updating their parameters) to enlarge the batch size during training; we then reduce the batch size and train all parameters. </p></li>
<li><p>Something less effective:<br>
class-aware sampling<br>
data cleaning </p></li>
</ul> |
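<p>As referenced in the fusion section above, the model fusion is a score-weighted average of the per-model predictions; a minimal sketch (it assumes class probabilities are averaged, with weights taken from the public LB scores listed in the backbone table):</p>
<pre><code>import numpy as np

def weighted_fusion(prob_list, public_scores):
    """prob_list: list of [num_test, num_classes] probability arrays,
    public_scores: the corresponding public LB scores used as weights."""
    weights = np.asarray(public_scores, dtype=np.float64)
    weights = weights / weights.sum()
    fused = sum(w * p for w, p in zip(weights, prob_list))
    return fused.argmax(axis=1)
</code></pre>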