Welcome to the 2023 edition of Kaggle's Playground Series! Thank you to everyone who has participated in and contributed to the Season 3 Playground Series so far!
With the same goal of giving the Kaggle community a variety of fairly lightweight challenges that can be used to learn and sharpen skills in different aspects of machine learning and data science, we will continue launching Tabular Tuesday competitions in June, every Tuesday at 00:00 UTC, with each competition running for two weeks. Again, these will be fairly lightweight datasets, synthetically generated from real-world data, that provide an opportunity to quickly iterate through various model and feature-engineering ideas, create visualizations, etc.
💡 Getting Started Notebook
To get started quickly, feel free to take advantage of this starter notebook.
Synthetically-Generated Datasets
Using synthetic data for Playground competitions allows us to strike a balance between having real-world data (with named features) and ensuring test labels are not publicly available. This allows us to host competitions with more interesting datasets than in the past. While there are still challenges with synthetic data generation, the state of the art is much better now than when we started the Tabular Playground Series two years ago, and the goal is to produce datasets that have far fewer artifacts. Please feel free to give us feedback on the datasets for the different competitions so that we can continue to improve!
Evaluation
Submissions are evaluated on the area under the ROC curve between the predicted probabilities and the ground truth for each target; the final score is the average of the individual AUCs across the predicted columns.
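For local validation, this column-averaged ROC AUC can be reproduced with scikit-learn. The sketch below is illustrative only; it assumes `y_true` and `y_pred` are pandas DataFrames holding the true labels and predicted probabilities with EC1 and EC2 columns.
```
import numpy as np
from sklearn.metrics import roc_auc_score

def mean_column_auc(y_true, y_pred, targets=("EC1", "EC2")):
    """Average of the per-target ROC AUCs, mirroring the competition metric."""
    return float(np.mean([roc_auc_score(y_true[t], y_pred[t]) for t in targets]))
```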
Submission File
For each id in the test set, you must predict the probability for each of the targets EC1 and EC2. The file should contain a header and have the following format:
```
id,EC1,EC2
14838,0.22,0.71
14839,0.78,0.43
14840,0.53,0.11
etc.
```
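As a rough sketch of producing a file in this format with pandas (the 0.5 values are placeholders standing in for real model probabilities):
```
import pandas as pd

test = pd.read_csv("test.csv")
submission = pd.DataFrame({
    "id": test["id"],
    "EC1": 0.5,  # replace with predicted probabilities for EC1
    "EC2": 0.5,  # replace with predicted probabilities for EC2
})
submission.to_csv("submission.csv", index=False)
```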
Dataset Description
The dataset for this competition (both train and test) was generated from a deep learning model trained on a portion of the Multi-label Classification of enzyme substrates dataset. This dataset uses only a subset of the original features (those with the most signal). Feature distributions are close to, but not exactly the same as, the original. Feel free to use the original dataset as part of this competition, both to explore the differences and to see whether incorporating the original data in training improves model performance.
Note: For this challenge, the training data contains six binary target columns (EC1–EC6), but you are only asked to predict the first two (EC1 and EC2).
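If you want to experiment with mixing in the original data, one minimal sketch is below; `original.csv` is a placeholder name for the separately downloaded Multi-label Classification of enzyme substrates dataset, not a file provided with this competition.
```
import pandas as pd

train = pd.read_csv("train.csv")
original = pd.read_csv("original.csv")  # placeholder path for the original dataset

# Keep only the columns the two frames share, stack them, and drop exact duplicates.
common = [c for c in train.columns if c in original.columns]
combined = pd.concat([train[common], original[common]], ignore_index=True).drop_duplicates()
```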
Files:
- `train.csv` - the training dataset; [EC1 - EC6] are the (binary) targets, although you are only asked to predict EC1 and EC2.
- `test.csv` - the test dataset; your objective is to predict the probability of the two targets EC1 and EC2.
- `sample_submission.csv` - a sample submission file in the correct format.
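Putting the pieces together, a minimal end-to-end baseline might look like the sketch below. The choice of HistGradientBoostingClassifier is an illustrative assumption, not part of the competition, and the feature list simply excludes `id` and the six target columns.
```
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier

train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

targets = ["EC1", "EC2"]
drop_cols = ["id"] + [f"EC{i}" for i in range(1, 7)]
features = [c for c in train.columns if c not in drop_cols]

submission = pd.DataFrame({"id": test["id"]})
for target in targets:
    model = HistGradientBoostingClassifier(random_state=0)
    model.fit(train[features], train[target])
    # Probability of the positive class for this target.
    submission[target] = model.predict_proba(test[features])[:, 1]

submission.to_csv("submission.csv", index=False)
```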