Welcome to the 2023 edition of Kaggle's Playground Series! Thank you to everyone who has participated in and contributed to the Season 3 Playground Series so far!

With the same goal of giving the Kaggle community a variety of fairly lightweight challenges that can be used to learn and sharpen skills in different aspects of machine learning and data science, we will continue launching Tabular Tuesday competitions in March, every Tuesday at 00:00 UTC, with each competition running for two weeks. Again, these will be fairly lightweight datasets that are synthetically generated from real-world data, and they will provide an opportunity to quickly iterate through various model and feature engineering ideas, create visualizations, etc.

Synthetically-Generated Datasets

Using synthetic data for Playground competitions allows us to strike a balance between having real-world data (with named features) and ensuring test labels are not publicly available. This allows us to host competitions with more interesting datasets than in the past. While there are still challenges with synthetic data generation, the state of the art is much better now than when we started the Tabular Playground Series two years ago, and the goal is to produce datasets that have far fewer artifacts. Please feel free to give us feedback on the datasets for the different competitions so that we can continue to improve!

Evaluation

Submissions are scored on the log loss:

\[ \text{LogLoss} = - \frac{1}{n} \sum_{i=1}^n \left[ y_i \log(\hat{y}_i) + (1 - y_i) \log(1 - \hat{y}_i) \right] \]

where

- \( n \) is the number of rows in the test set
- \( \hat{y}_i \) is the predicted probability that the observation is a pulsar
- \( y_i \) is 1 if the observation is a pulsar, and 0 otherwise
- \( \log \) is the natural logarithm

The use of the logarithm provides extreme punishment for being both confident and wrong. In the worst possible case, a prediction that something is true when it is actually false adds an infinite amount to your error score. To prevent this, predictions are bounded away from the extremes by a small value.
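
For reference only, here is a minimal sketch of how this metric can be computed in Python with NumPy. The `eps` clipping value is an illustrative choice for bounding predictions away from 0 and 1, not necessarily the exact bound used by the leaderboard; `sklearn.metrics.log_loss` computes the same quantity.

```python
import numpy as np

def binary_log_loss(y_true, y_pred, eps=1e-15):
    """Binary log loss with predictions clipped away from 0 and 1."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# A confident wrong prediction (0.99 for a true 0) is punished far more
# than a merely mediocre one.
print(binary_log_loss([1, 0, 1], [0.9, 0.1, 0.8]))   # ~0.14
print(binary_log_loss([1, 0, 1], [0.9, 0.99, 0.8]))  # ~1.64
```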

Submission File

For each id in the test set, you must predict the probability of the target Class. The file should contain a header and have the following format:

```
id,Class
117564,0.11
117565,0.32
117566,0.95
etc.
```
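
For illustration (not an official snippet), a file in this format can be written with pandas. The column names come from the sample above and the Files section; the logistic regression is just a placeholder for any probabilistic classifier.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

X, y = train.drop(columns=["id", "Class"]), train["Class"]

# Any classifier exposing predict_proba works; this one is only a simple baseline.
model = LogisticRegression(max_iter=1000).fit(X, y)
proba = model.predict_proba(test.drop(columns=["id"]))[:, 1]  # P(Class == 1)

pd.DataFrame({"id": test["id"], "Class": proba}).to_csv("submission.csv", index=False)
```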

Dataset Description

The dataset for this competition (both train and test) was generated from a deep learning model trained on the Pulsar Classification dataset. Feature distributions are close to, but not exactly the same as, the original. Feel free to use the original dataset as part of this competition, both to explore the differences and to see whether incorporating the original in training improves model performance.
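
One common way to try this is simply to append the original rows to the synthetic training set and let cross-validation decide whether it helps. The sketch below is only illustrative: the file name pulsar_data.csv and the assumption that it shares feature names and a Class column with train.csv should be verified against the actual original dataset.

```python
import pandas as pd

synthetic = pd.read_csv("train.csv").drop(columns=["id"])
original = pd.read_csv("pulsar_data.csv")  # hypothetical path; columns assumed to match train.csv

# Stack the two sources and drop exact duplicates; validate the effect with CV.
combined = pd.concat([synthetic, original], ignore_index=True).drop_duplicates()
X, y = combined.drop(columns=["Class"]), combined["Class"]
```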

Files

- train.csv - the training dataset; Class is the (binary) target
- test.csv - the test dataset; your objective is to predict the probability of Class (whether the observation is a pulsar)
- sample_submission.csv - a sample submission file in the correct format