# Running:

You can run the performance benchmark with `./benchmark.sh`.

Modify the search code in `paralleleureqa.jl` and `eureqa.jl` to your liking
(see below for options). Then, in a new Julia file called
`myfile.jl`, you can write:

```julia
include("paralleleureqa.jl")
fullRun(10,
    npop=100,
    annealing=true,
    ncyclesperiteration=1000,
    fractionReplaced=0.1f0,
    verbosity=100)
```
The first argument is the number of migration periods to run,
with `ncyclesperiteration` setting how many generations occur
per migration period. `npop` is the number of population members,
`annealing` determines whether to stay in exploration mode or
anneal it down with each cycle, and `fractionReplaced` is the
fraction of the population replaced by migrated equations at each
step.


Run it with threading turned on using:
`julia --threads auto -O3 myfile.jl`
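To confirm that the threads were actually picked up, you can add a quick
sanity check at the top of `myfile.jl`; this uses only Julia's standard
`Threads` module, nothing specific to this repository:
```julia
# Print the number of threads this session was started with; it should
# match what was passed via --threads.
println("Running with $(Threads.nthreads()) threads")
```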

## Modification

You can change the binary and unary operators in `eureqa.jl` here:
```julia
const binops = [plus, mult]
const unaops = [sin, cos, exp];
```
E.g., you can add a power operator (written so it stays real-valued for negative bases) with:
```julia
pow(x::Float32, y::Float32)::Float32 = sign(x)*abs(x)^y
const binops = [plus, mult, pow]
```
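Other protected operators follow the same pattern. As a sketch (the
`safe_div` name and its guard threshold are illustrative, not part of the
repository):
```julia
# Hypothetical protected division: returns 0 instead of Inf/NaN
# when the denominator is near zero.
safe_div(x::Float32, y::Float32)::Float32 = abs(y) < 1f-6 ? 0f0 : x / y
const binops = [plus, mult, safe_div]
```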

You can change the dataset here:
```julia
const X = convert(Array{Float32, 2}, randn(100, 5)*2)
# Here is the function we want to learn (x2^2 + cos(x3))
const y = convert(Array{Float32, 1}, ((cx,)->cx^2).(X[:, 2]) + cos.(X[:, 3]))
```
by either loading in a dataset, or modifying the definition of `y`.
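For instance, a CSV file could be loaded with the standard-library
`DelimitedFiles` module; the filename and column layout below are
assumptions for the sake of the sketch:
```julia
using DelimitedFiles

# Hypothetical layout: the first 5 columns are features, the last is the target.
data = convert(Array{Float32, 2}, readdlm("mydata.csv", ','))
const X = data[:, 1:5]
const y = data[:, 6]
```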

### Hyperparameters

Annealing allows each evolutionary cycle to turn down the exploration
rate over time: at the end (temperature 0), it will only select solutions
better than existing solutions.

The `parsimony` parameter controls how strongly complex solutions are penalized:
```julia
const parsimony = 0.01
```
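Conceptually, the penalty acts as a per-node cost added to the prediction
error; the exact form in `eureqa.jl` may differ, but the effect can be seen
with made-up numbers:
```julia
# Illustrative only: two candidates with equal error but different sizes;
# the smaller equation wins once the parsimony penalty is applied.
error_a, nodes_a = 0.50, 5
error_b, nodes_b = 0.50, 25
score_a = error_a + parsimony * nodes_a   # 0.55
score_b = error_b + parsimony * nodes_b   # 0.75
```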

Finally, the following determines how much to scale the temperature by
(`T` ranges between 0 and 1).
```julia
const alpha = 10.0
```
Larger alpha means more exploration.
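As a rough mental model, assuming a Metropolis-style acceptance rule (the
exact rule in `eureqa.jl` may differ):
```julia
# Sketch of annealed acceptance: improvements (delta <= 0) are always
# kept; worse solutions survive with probability exp(-delta / (alpha*T)),
# so a larger alpha (or T) means more exploration. At T = 0, only
# strict improvements pass, matching the annealing description above.
function accept(delta::Float32, T::Float32; alpha::Float32=10f0)
    delta <= 0f0 && return true
    return rand() < exp(-delta / (alpha * T))
end
```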

One can also adjust the relative probabilities of each mutation here:
```julia
weights = [8, 1, 1, 1, 2]
```
(for: 1. perturb constant, 2. mutate operator,
3. append a node, 4. delete a subtree, 5. do nothing).
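For intuition, a weighted choice like this can be drawn by normalizing the
weights into a cumulative distribution and inverting it; this is a sketch,
not necessarily how `eureqa.jl` samples internally:
```julia
# Sample a mutation index in proportion to `weights`
# (1 = perturb constant, ..., 5 = do nothing).
weights = [8, 1, 1, 1, 2]
cdf = cumsum(weights) ./ sum(weights)
mutation = findfirst(>=(rand()), cdf)
```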


# TODO

- [ ] Create a Python interface
- [x] Create a benchmark for speed
- [ ] Create a benchmark for accuracy
- [ ] Record hall of fame
- [ ] Optionally (with a hyperparameter) migrate the hall of fame, rather than the current bests
- [x] Test performance of reduced precision integers
    - No effect
- [ ] Create a struct to pass through all hyperparameters, instead of treating them as constants
    - Make sure this doesn't affect performance
- [ ] Hyperparameter tune
- [ ] Use a NN to generate weights over the mutation probability distribution, and train it on some randomly-generated equations
- [ ] Performance:
    - Use an enum for functions instead of storing them?