MilesCranmer committed on
Commit
f570496
1 Parent(s): 941b663

More ideas to todo

Files changed (1):
README.md +8 -7
README.md CHANGED
@@ -79,19 +79,20 @@ weights = [8, 1, 1, 1, 2]
 - [ ] Create a Python interface
 - [x] Create a benchmark for speed
 - [ ] Create a benchmark for accuracy
-- [ ] Record hall of fame
-- [ ] Optionally (with hyperparameter) migrate the hall of fame, rather than current bests
+- [x] Record hall of fame
+- [x] Optionally (with hyperparameter) migrate the hall of fame, rather than current bests
 - [x] Test performance of reduced precision integers
     - No effect
 - [ ] Create struct to pass through all hyperparameters, instead of treating as constants
     - Make sure doesn't affect performance
 - [ ] Hyperparameter tune
+- [ ] Simplify subtrees with only constants beneath them. Or should I? Maybe randomly simplify sometimes?
 - [ ] Use NN to generate weights over all probability distribution, and train on some randomly-generated equations
 - [ ] Performance:
-    - Use an enum for functions instead of storing them?
+    - [ ] Use an enum for functions instead of storing them?
     - Current most expensive operations:
-        - deepcopy() before the mutate, to see whether to accept or not.
-            - (This is only for simulated annealing... but generally we don't need to use this)
-        - Calculating the loss function - there is duplicate calculations happening.
-        - Declaration of the weights array every iteration
+        - [x] deepcopy() before the mutate, to see whether to accept or not.
+            - Seems like its necessary right now. But still by far the slowest option.
+        - [ ] Calculating the loss function - there is duplicate calculations happening.
+        - [ ] Declaration of the weights array every iteration
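The new "simplify subtrees with only constants beneath them" item is a standard constant-folding pass over an expression tree. A minimal sketch of the idea is below; the `Node` layout and the `fold_constants` helper are hypothetical illustrations chosen for this example, not code from this repository.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical expression-tree node: a leaf holds either a constant
# value or a variable (feature) index; an internal node holds an
# operator and its child subtrees.
@dataclass
class Node:
    op: Optional[Callable] = None   # e.g. operator.add; None for leaves
    children: tuple = ()
    value: Optional[float] = None   # set for constant leaves
    feature: Optional[int] = None   # set for variable leaves

def fold_constants(node: Node) -> Node:
    """Collapse any subtree whose leaves are all constants into a
    single constant leaf, as the TODO item suggests."""
    if node.op is None:
        return node  # leaf: nothing to fold
    children = tuple(fold_constants(c) for c in node.children)
    # If every child folded down to a constant leaf, evaluate the
    # operator once now and replace the whole subtree with the result.
    if all(c.op is None and c.value is not None for c in children):
        return Node(value=node.op(*(c.value for c in children)))
    return Node(op=node.op, children=children)
```

For example, folding `x0 + (2 * 3)` collapses the `(2 * 3)` subtree to a single constant leaf while leaving the variable branch untouched, so the equation costs one fewer evaluation per data point afterwards.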
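The "use an enum for functions instead of storing them" performance item trades stored function objects for small integer tags dispatched in a branch, which makes nodes cheaper to store and copy. A rough sketch of that pattern, with a made-up `Op` enum and `apply_op` dispatcher that are illustrative only:

```python
from enum import IntEnum
import math

# Hypothetical operator tags: each tree node would store one small
# integer instead of a function object.
class Op(IntEnum):
    ADD = 0
    MUL = 1
    SIN = 2

def apply_op(op: Op, x: float, y: float = 0.0) -> float:
    """Dispatch on the enum tag rather than calling a stored function
    object; unary operators ignore the second argument."""
    if op == Op.ADD:
        return x + y
    if op == Op.MUL:
        return x * y
    if op == Op.SIN:
        return math.sin(x)
    raise ValueError(f"unknown op: {op!r}")
```

Whether this actually wins depends on the language and compiler; the question mark in the TODO item is warranted, since the branch cost can offset the savings from avoiding indirect calls.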