Modalities: Tabular
DuDie72 committed · Commit 8ace65b · 1 Parent(s): f4a6035

added new github examples to dataset

Files changed (1)
  1. README.md +34 -10
README.md CHANGED
@@ -227,15 +227,39 @@ configs:
  - "pbt/cc_pendulum_sac.csv"
  ---
 
- # The ARLBench Performance Dataset
 
- [ARLBench](https://github.com/automl/arlbench) is a benchmark for hyperparameter optimization in Reinforcement Learning.
- Since we performed several thousand runs on the benchmark to find meaningful HPO test settings in RL, we collect them in this dataset for future use.
- These runs could be used to meta-learn information about the hyperparameter landscape or warmstart HPO tools.
 
- In detail, it contains each 10 runs for the landscape data of PPO, DQN and SAC respectively on the Atari-5 environments, four XLand gridworlds, four Brax walkers, five classic control and two Box2D environments.
- Additionally, it contains each 3 runs for the 5 optimzation algorithms PBT, SMAC, SMAC with Multi-Fidelity and Random Search for each algorithm and environment pair.
- The dataset follows the mapping:
- $$\text{Training Budget and Seed, Hyperparameter Configuration} \mapsto \text{Training Performance}$$
- For the optimization runs, it additionally includes the key *optimization seed* to distinguish configurations between the 5 optimization runs for each algorithm/environment pair.
- For more information, refer to the [ARLBench](https://arxiv.org/abs/2409.18827) paper.
+ # **The ARLBench Performance Dataset**
 
+ [**ARLBench**](https://github.com/automl/arlbench) is a benchmark designed for **hyperparameter optimization (HPO) in Reinforcement Learning (RL)**. Given that we conducted several thousand runs to identify meaningful HPO test settings for RL, we have compiled these results into a dataset for future research and applications.
 
+ This dataset can be leveraged to:
+ - **Meta-learn insights** about the hyperparameter landscape in RL.
+ - **Warm-start HPO tools** by utilizing previously explored configurations.
+
+ ### **Dataset Details**
+ The dataset includes:
+ - **Landscape data:** 10 runs each for PPO, DQN, and SAC across:
+   - Atari-5 environments
+   - Four XLand gridworlds
+   - Four Brax walkers
+   - Five classic control environments
+   - Two Box2D environments
+ - **Optimization data:** 3 runs per optimization algorithm for each algorithm-environment combination, covering:
+   - Population-Based Training (PBT)
+   - SMAC
+   - SMAC with Multi-Fidelity
+   - Random Search
+
+ ### **Dataset Mapping**
+ The dataset follows this mapping:
+ $$\text{training steps, seed, hyperparameter configuration} \mapsto \text{training performance}$$
+
+ For optimization runs, it additionally includes:
+ - **Optimization seed**: Differentiates between the five optimization runs per algorithm-environment pair.
+ - **Optimization step**: Tracks configurations evaluated at different steps.
+
+ ### **Example Usage**
+ You can find example notebooks demonstrating how to use:
+ - **[Landscape data](https://github.com/automl/arlbench/blob/main/examples/landscape_analysis.ipynb)**
+ - **[Optimization data](https://github.com/automl/arlbench/blob/main/examples/optimization_data_analysis.ipynb)**
+
+ For more details, refer to the **[ARLBench paper](https://arxiv.org/abs/2409.18827)**.
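
As a quick, hedged illustration of the mapping described in the updated README, the sketch below reads one of the optimization-run CSVs listed in the dataset configs (`pbt/cc_pendulum_sac.csv`) with pandas. The column names `optimization_seed` and `performance` are assumptions for illustration, not the dataset's confirmed schema, so check `df.columns` first; the example notebooks linked above show the intended workflows in full.

```python
# Minimal sketch: load one PBT optimization-run CSV and inspect how
# (training budget, seed, hyperparameter configuration) maps to training
# performance. Column names below are assumptions and may differ.
import pandas as pd

# Path taken from the dataset's config list: a PBT run on classic-control
# Pendulum with SAC.
df = pd.read_csv("pbt/cc_pendulum_sac.csv")

# Check the actual schema before relying on any column names.
print(df.columns.tolist())
print(df.head())

# Hypothetical aggregation, guarded so it only runs if the assumed columns
# exist: mean performance per optimization seed.
if {"optimization_seed", "performance"}.issubset(df.columns):
    print(df.groupby("optimization_seed")["performance"].mean())
```

The landscape CSVs can be read the same way; per the README, they simply lack the optimization-specific keys.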