MilesCranmer committed • Commit fc67c56 • Parent: 87880d1

Remove unused TODO file
TODO.md (DELETED)
@@ -1,142 +0,0 @@
# TODO

- [x] Async threading, and have a server of equations, so that threads aren't waiting for others to finish.
- [x] Print out speed of equation evaluation over time; measure the time it takes per cycle.
- [x] Add ability to pass an operator as an anonymous function string, e.g., `binary_operators=["g(x, y) = x+y"]` (a parsing sketch follows this list).
- [x] Add error bar capability (thanks to Johannes Buchner for the suggestion).
- [x] Why don't the constants continually change? It should optimize them every time the equation appears.
    - Restart the optimizer to help with this.
- [x] Add several common unary and binary operators; list these.
- [x] Try other initial conditions for the optimizer.
- [x] Make the scaling of changes to constants a hyperparameter.
- [x] Make the deletion op join the deleted subtree to its parent.
- [x] Update hall of fame every iteration?
    - Seems to overfit early if we do this.
- [x] Consider adding a mutation to pass an operator in through a new binary operator (e.g., exp(x3) -> plus(exp(x3), ...)).
    - (Added a full insertion operator.)
- [x] Add a node at the top of a tree.
- [x] Insert a node at the top of a subtree.
- [x] Record the very best individual in each population, and return it at the end.
- [x] Write our own tree copy operation; deepcopy() is the slowest operation by far (a sketch follows this list).
- [x] Hyperparameter tune.
- [x] Create a benchmark for accuracy.
- [x] Add an interface for either defining an operation to learn, or loading in an arbitrary dataset.
    - Could just write out the dataset in Julia, or load it.
- [x] Create a Python interface.
- [x] Explicit constant optimization on the hall of fame.
    - Create a method to find and return all constants, from left to right.
    - Create a method to find and set all constants, in the same order.
    - Pull in some optimization algorithm and add it. Keep the package small!
- [x] Create a benchmark for speed.
- [x] Simplify subtrees with only constants beneath them. Or should I? Maybe randomly simplify sometimes?
- [x] Record hall of fame.
- [x] Optionally (with a hyperparameter) migrate the hall of fame, rather than the current bests.
- [x] Test performance of reduced-precision integers.
    - No effect.
- [x] Create a struct to pass through all hyperparameters, instead of treating them as constants.
    - Make sure this doesn't affect performance.
- [x] Rename the package to avoid trademark issues.
    - PySR?
- [x] Put on PyPI.
- [x] Treat the baseline as a solution.
- [x] Print score alongside MSE: \delta \log(MSE) / \delta \log(complexity).
- [x] Calculating the loss function: there were duplicate calculations happening.
- [x] Declaration of the weights array every iteration.
- [x] Sympy evaluation.
- [x] Threaded recursion.
- [x] Test suite.
- [x] Performance: use an enum for functions instead of storing them?
    - Gets a ~40% speedup on a small test.
- [x] Use @fastmath.
- [x] Try @spawn over each sub-population: do a random sort, compute a mutation for each, then replace the 10% oldest.
- [x] Control max depth, rather than max number of nodes?
- [x] Allow the user to pass names for variables; use these when printing.
- [x] Check for domain errors in an equation quickly, before actually running the entire array through it. (We do this recursively now: every single equation is checked for NaNs/Infs as it is computed.)
- [x] Read the Docs page.
- [x] Create a backup CSV file so there is always something to copy from for `PySR`. Also use a random hall-of-fame file by default. Call a function to read from the CSV after running, so we don't need to run again. Dump scores alongside MSE to the .csv (and return with Pandas).
- [x] Better cleanup of zombie processes after <ctrl-c>.
- [x] Consider printing output sorted by score, not by complexity.
- [x] Increase max complexity slowly over time, up to the actual max.
- [x] Record density over complexity. Favor equations that have a density we have not explored yet; we want the final density to be evenly distributed.
- [x] Do printing from the Python side. Then we can do simplification and pretty-printing.
- [x] Sympy printing.
- [x] Store Project.toml inside PySR's Python code, rather than copied to site-packages.
- [ ] Sort these todo lists by priority.
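For the anonymous-operator-string item above, a minimal sketch, assuming the string is simply parsed and evaluated on the Julia side (this is illustrative, not necessarily how PySR registers operators internally):

```julia
# Assumption: the definition string is valid Julia and can be eval'd as-is.
defn = "g(x, y) = x + y"
g = eval(Meta.parse(defn))   # defines `g` and returns the function
g(1.0f0, 2.0f0)              # 3.0f0
```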
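And for the tree-copy item, a sketch of a hand-rolled copy over a hypothetical linked-list `Node` type (field names are assumptions, not PySR's real struct); a field-by-field copy skips the object-graph bookkeeping that makes `deepcopy()` slow:

```julia
# Hypothetical node type, for illustration only.
mutable struct Node
    degree::Int                 # 0 = leaf, 1 = unary, 2 = binary
    val::Float32                # constant value, or feature index
    op::Int                     # operator index (for degree >= 1)
    l::Union{Node, Nothing}
    r::Union{Node, Nothing}
end

# Recursive field-by-field copy: no IdDict tracking, no reflection.
function copy_node(n::Node)
    l = n.l === nothing ? nothing : copy_node(n.l)
    r = n.r === nothing ? nothing : copy_node(n.r)
    return Node(n.degree, n.val, n.op, l, r)
end
```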

- [ ] Automatically convert log, log10, log2, and pow to the correct operators (a rewrite sketch follows this list).
- [ ] I think the simplification isn't working correctly (post-merging SymbolicUtils).
- [ ] Show a demo of PySRRegressor: fit equations, then show how to view them.
- [ ] Add "selected" column string to the regular equations dict.
- [ ] List "Loss" instead of "MSE".

## Feature ideas

- [ ] Other default losses (e.g., abs, other likelihoods, or just allow the user to pass this as a string).
- [ ] Other dtypes available.
- [ ] NSGA-II.
- [ ] Cross-validation.
- [ ] Hierarchical model, so functional forms can be re-used. Output of one equation goes into a second equation?
- [ ] Add a function to plot equations.
- [ ] Refresh the screen rather than dumping to stdout?
- [ ] Add ability to save state from Python.
- [ ] Additional degree (higher-arity) operators?
- [ ] Multiple targets (vector ops). Some options:
    - Idea 1: the Node struct contains an argument for which registers it is applied to. Then we can work with multiple components simultaneously, though this may be tricky to get right.
    - Idea 2: each op is defined by its input/output space. Some operators are flexible, and the spaces should be adjusted automatically; otherwise, only consider ops that make a tree possible. But we will need additional ops here to get it to work.
    - Idea 3: define each equation in two parts: one that is shared between all outputs, and one that is different between all outputs. Maybe this could be an array of nodes corresponding to each output, and those nodes would define their functions.
    - Much easier option: simply flatten the output vector, and set the index as another input feature. The learned equation will be a single equation containing the index as a feature (a sketch follows this list).
- [ ] Tree crossover? I.e., an equation can take as input a part of itself, so long as it is at the same level or below?
- [ ] Create a flexible way of providing "simplification recipes", e.g., plus(plus(T, C), C) => plus(T, +(C, C)). The user could pass these.
- [ ] Consider allowing multi-threading to be turned off, for faster testing (cache issue on Travis). Or simply fix the caching issue there.
- [ ] Consider returning only the equation of interest, rather than all equations.
- [ ] Enable derivative operators. These would differentiate their right argument with respect to their left argument, some input variable.
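For the "flatten the output vector" option above, a minimal sketch (names are illustrative): each output component becomes its own row, with the component index appended as an extra feature:

```julia
# Turn an n×d input matrix and n×m target matrix into a single-target
# problem with n*m rows and d+1 features (last feature = output index).
function flatten_targets(X::Matrix{Float32}, Y::Matrix{Float32})
    n, d = size(X)
    m = size(Y, 2)
    Xf = zeros(Float32, n * m, d + 1)
    yf = zeros(Float32, n * m)
    row = 1
    for i in 1:n, j in 1:m
        Xf[row, 1:d] .= X[i, :]
        Xf[row, d + 1] = j      # which output component this row encodes
        yf[row] = Y[i, j]
        row += 1
    end
    return Xf, yf
end
```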

## Algorithmic performance ideas

- [ ] Use PackageCompiler.jl to compile sr.jl into a standalone binary that can be used by pysr.
- [ ] When doing equation warmup, only migrate those equations with almost the same complexity, rather than having to consider simple equations later in the game.
- [ ] Right now we only update the score based on a subset of the data. Need to update the score based on the entire dataset! Note that the optimizer is only used sometimes.
- [ ] Idea: use the gradient of an equation with respect to each operator (perhaps simply added to each operator) to tell which part is the most "sensitive" to changes. Then, perhaps insert/delete/mutate on that part of the tree?
- [ ] Start populations staggered, so that there is more frequent printing (and pops that start a bit later already have a hall of fame)?
- [ ] Consider adding a mutation for constant <-> variable.
- [ ] Implement more parts of the original Eureqa algorithms: https://www.creativemachineslab.com/eureqa.html
- [ ] Experiment with freezing parts of the model; then we only append/delete at the end of the tree.
- [ ] Use a NN to generate weights over the mutation probability distribution, conditional on the error and the existing equation, and train it on some randomly-generated equations.
- [ ] For the hierarchical idea: after running some number of iterations, search for the "most common pattern", then turn that subtree into its own operator.
- [ ] Calculate feature importances based on features we've already seen, then weight those features up in all random generations.
- [ ] Calculate feature importances of future mutations, by looking at the correlation between the residual of the model and the features (a weighting sketch follows this list).
    - Store these feature importances, and periodically update them.
- [ ] Punish depth rather than size, as depth really hurts during optimization.
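A sketch of the residual-correlation idea above, under the assumption that the weights are just normalized absolute correlations (the function name is hypothetical):

```julia
using Statistics   # for `cor`

# Score each feature by |corr(residual, feature)| and normalize into
# sampling probabilities for random equation generation.
function feature_weights(X::Matrix{Float32}, residual::Vector{Float32})
    scores = [abs(cor(X[:, j], residual)) for j in 1:size(X, 2)]
    return scores ./ sum(scores)
end
```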

## Code performance ideas

- [ ] How hard is it to turn the recursive array evaluation into a for loop? (A loop-based sketch appears after the notes below.)
- [ ] Try defining a binary tree as an array, rather than a linked list. See https://stackoverflow.com/a/6384714/2689923
    - In the `array` branch.
- [ ] Add true multi-node processing, with MPI, or just file sharing. Multiple populations per core.
    - Ongoing in the `cluster` branch.
- [ ] Performance: try inlining things?
- [ ] Try storing things like the number of nodes in a tree; then we can iterate instead of counting.

```julia
# Concrete element types (Int, Float32, Bool) keep these arrays unboxed;
# an abstract element type like `Integer` would store boxed pointers.
mutable struct Tree
    degree::Vector{Int}     # 0 = leaf, 1 = unary, 2 = binary
    val::Vector{Float32}    # constant value, or feature index for leaves
    constant::Vector{Bool}  # whether a leaf holds a constant
    op::Vector{Int}         # operator index for internal nodes
    Tree(s::Int) = new(zeros(Int, s), zeros(Float32, s), zeros(Bool, s), zeros(Int, s))
end
```

- Then we could even work with trees on the GPU, since they are all pre-allocated arrays.
- A population could be a single Tree, but with degree 2 everywhere, so that a slice of the population arrays forms a tree.
- How many operations can we do via matrix ops? Mutate node => easy.
- Can probably batch and do many operations at once across a population.
- Or across all populations! Mutate operator: index a 2D array and set it to a random vector? But the indexing might hurt.
- The big advantage: we can evaluate all new mutated trees at once, as one massive matrix operation (a loop-based evaluation sketch follows).
- Can control depth, rather than maxsize: just pretend all trees are full and of the same depth. Then we really don't need to care about depth.
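A sketch of loop-based evaluation over the array layout above, assuming a full binary tree stored in heap order (node i has children 2i and 2i+1) and an assumed table of binary operators; a real evaluator would also need unary operators and NaN/Inf checks:

```julia
const BINOPS = [+, *]   # assumed operator table, indexed by `op`

# Evaluate every data row at once, with one scratch column per node.
# Visiting nodes in reverse index order guarantees children come first.
function eval_tree(t::Tree, X::Matrix{Float32})
    n = size(X, 1)
    s = length(t.degree)
    buf = zeros(Float32, n, s)
    for i in s:-1:1
        if t.degree[i] == 0
            # Leaf: either a constant, or a feature column of X.
            buf[:, i] = t.constant[i] ? fill(t.val[i], n) : X[:, Int(t.val[i])]
        else
            f = BINOPS[t.op[i]]
            buf[:, i] = f.(buf[:, 2i], buf[:, 2i + 1])
        end
    end
    return buf[:, 1]   # root node's value for every row
end
```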

- [ ] Can we cache calculations, or does the compiler do that? E.g., I should only have to run exp(x0) once; after that it should be read from memory (a memoization sketch is at the end of this list).
    - Done on the `caching` branch. Currently finding that this is quite slow (presumably because memory allocation is the main issue).
- [ ] Add GPU capability?
    - Not sure if possible, as binary trees are the real bottleneck.
    - Could generate on the CPU, evaluate the score on the GPU?
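For the caching item, a sketch of subtree memoization keyed on the printed form of the subtree; `evaluate` is a stand-in for the real evaluator, and, as noted above, the hashing and allocation cost may well outweigh the savings:

```julia
const CACHE = Dict{String, Vector{Float32}}()

# Memoized wrapper around a hypothetical `evaluate(tree, X)`.
function cached_eval(tree, X::Matrix{Float32})
    key = string(tree)            # assumes trees print deterministically
    get!(CACHE, key) do
        evaluate(tree, X)         # stand-in for the real evaluator
    end
end
```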