MilesCranmer committed · commit 44d381d · 1 parent: 5908dc9

Update todo
README.md
CHANGED
@@ -282,6 +282,8 @@ pd.DataFrame, Results dataframe, giving complexity, MSE, and equations
 - [x] Print score alongside MSE: \delta \log(MSE)/\delta \log(complexity)
 - [x] Calculating the loss function - there is duplicate calculations happening.
 - [x] Declaration of the weights array every iteration
+- [x] Sympy evaluation
+- [x] Threaded recursion
 - [ ] Add true multi-node processing, with MPI, or just file sharing. Multiple populations per core.
   - Ongoing in cluster branch
 - [ ] Consider allowing multi-threading turned off, for faster testing (cache issue on travis). Or could simply fix the caching issue there.
@@ -297,7 +299,6 @@ pd.DataFrame, Results dataframe, giving complexity, MSE, and equations
 - [ ] Implement more parts of the original Eureqa algorithms: https://www.creativemachineslab.com/eureqa.html
 - [ ] Experiment with freezing parts of model; then we only append/delete at end of tree.
 - [ ] Sympy printing
-- [ ] Sympy evaluation
 - [ ] Consider adding mutation for constant<->variable
 - [ ] Hierarchical model, so can re-use functional forms. Output of one equation goes into second equation?
 - [ ] Use NN to generate weights over all probability distribution conditional on error and existing equation, and train on some randomly-generated equations
@@ -305,7 +306,6 @@ pd.DataFrame, Results dataframe, giving complexity, MSE, and equations
   - Not sure if possible, as binary trees are the real bottleneck.
 - [ ] Performance:
   - Use an enum for functions instead of storing them?
-  - Threaded recursion?
 - [ ] Idea: use gradient of equation with respect to each operator (perhaps simply add to each operator) to tell which part is the most "sensitive" to changes. Then, perhaps insert/delete/mutate on that part of the tree?
 - [ ] For hierarchical idea: after running some number of iterations, do a search for "most common pattern". Then, turn that subtree into its own operator.
 - [ ] Additional degree operators?
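The first completed item in the diff tracks printing a score alongside the MSE, defined as \delta \log(MSE)/\delta \log(complexity) along the results dataframe mentioned in the hunk context. Below is a minimal pandas sketch of that finite-difference quantity; the column names (`Complexity`, `MSE`, `Score`), the example values, and the sign convention (negated so that a sharper drop in error per unit of added complexity scores higher) are assumptions for illustration, not the project's actual output schema.

```python
# Hedged sketch: score as -delta log(MSE) / delta log(complexity)
# between successive rows of a complexity-sorted results dataframe.
# Column names and example values are made up for illustration.
import numpy as np
import pandas as pd

results = pd.DataFrame({
    "Complexity": [1, 3, 5, 9],
    "MSE": [2.10, 0.80, 0.31, 0.30],
}).sort_values("Complexity")

log_mse = np.log(results["MSE"])
log_complexity = np.log(results["Complexity"])

# The first row has no predecessor, so its score is set to 0.
results["Score"] = -(log_mse.diff() / log_complexity.diff()).fillna(0.0)
print(results)
```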
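The "use an enum for functions instead of storing them" item under Performance is about not keeping a function object in every expression-tree node. The search backend is not written in Python, so the sketch below only illustrates the general technique under that caveat: each node carries a small integer tag, and evaluation dispatches through a lookup table. All names here (`Op`, `OP_TABLE`, `evaluate`) are hypothetical.

```python
# Illustration only: dispatch on a stored enum tag instead of storing a
# callable in each expression-tree node. All names are hypothetical.
import operator
from enum import IntEnum


class Op(IntEnum):
    ADD = 0
    SUB = 1
    MUL = 2


OP_TABLE = {Op.ADD: operator.add, Op.SUB: operator.sub, Op.MUL: operator.mul}


def evaluate(op: Op, left: float, right: float) -> float:
    """Apply the binary operator named by the enum tag."""
    return OP_TABLE[op](left, right)


print(evaluate(Op.MUL, 3.0, 4.0))  # 12.0
```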