Created: 02 Apr 2014 | Modified: 30 Jun 2016
A bug in the innovation code was causing innovations not to happen unless a copying event also happened. I am scrapping the previous data: even though the \(\mu = 0.0\) case might be alright, we also need to be comparing time slices, and the new data seem to have different mean values for some of the key measurables. So I'm going to blast out a new set of runs.
Changes:
Simulations run to 10 million ticks, regardless of other parameters. We sample every 1MM ticks from 6MM to 10MM ticks, giving us 5 samples per simulation run (to detect non-stationarity).
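The sampling schedule above can be sketched in a few lines (the function name is hypothetical, not from the actual simulation codebase; the tick values are the ones quoted above):

```python
# Sketch of the sampling schedule: sample every 1MM ticks from 6MM to 10MM.
def sample_ticks(start=6_000_000, end=10_000_000, interval=1_000_000):
    """Return the ticks at which a sample is taken during a run."""
    return list(range(start, end + 1, interval))

ticks = sample_ticks()
# Five samples per run: 6MM, 7MM, 8MM, 9MM, 10MM
```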
I record the timing of runs. Currently, on my laptop and EC2 instances of type c3.xlarge, a run with 100 individuals takes 227 seconds, 225 individuals takes 293 seconds, and 400 individuals takes 372 seconds. This gives some predictability in completion times, and there is very little variation since we do 10MM ticks regardless of the copying and innovation events involved.
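For predicting completion times, a simple linear fit through the three measured points is enough. The linearity assumption is mine; the (population size, seconds) measurements are the ones quoted above:

```python
# Back-of-envelope runtime model: least-squares line through the three
# measured (population size, seconds-per-run) points. Linearity in
# population size is an assumption, not something verified in the notes.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

sizes = [100, 225, 400]
seconds = [227, 293, 372]
slope, intercept = fit_line(sizes, seconds)
# Roughly 0.5 s per individual on top of ~180 s of fixed per-run cost
```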
A small test run with learning rates of 0.1 and 0.8 was encouraging: things look different for some observables and not others. I think we're ready to go.
I also learned that the \(r=6, h=6\) case for trait trees is simply too large to perform automorphism group calculations on.
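A quick calculation shows why that case blows up, assuming the trait trees are complete rooted \(r\)-ary trees of depth \(h\) (my assumption about the structure). The automorphism group of such a rooted tree is an iterated wreath product of symmetric groups, of order \((r!)^{(r^h - 1)/(r - 1)}\):

```python
import math

# Size of a complete rooted r-ary tree of depth h, and the (astronomical)
# order of its automorphism group. Assumes trait trees are complete
# rooted r-ary trees; |Aut| = (r!)^(number of internal vertices).
def tree_stats(r, h):
    nodes = (r ** (h + 1) - 1) // (r - 1)   # total vertices
    internal = (r ** h - 1) // (r - 1)      # internal (non-leaf) vertices
    log10_aut = internal * math.log10(math.factorial(r))
    return nodes, log10_aut

nodes, log10_aut = tree_stats(6, 6)
# nodes = 55987; log10|Aut| is on the order of 2.7e4,
# i.e. the group order has tens of thousands of decimal digits
```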
The following runs will be divided across 6 machines, 2 each for population sizes 100, 225, and 400. Each population size will finish at a different time, and its server instances will be deactivated as it completes. The anticipated range is between 4 days (size 100) and 6.5 days (size 400).
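As a rough consistency check on that schedule, the per-run times imply how many runs each machine works through in the quoted wall time. The run counts below are inferred from the timings, not stated in the notes:

```python
# Implied runs per machine: quoted wall time divided by seconds per run.
def runs_in(days, seconds_per_run):
    return days * 86_400 // seconds_per_run

small = runs_in(4, 227)    # size-100 machine, ~4 days at 227 s/run
large = runs_in(6.5, 372)  # size-400 machine, ~6.5 days at 372 s/run
# Both work out to roughly 1,500 runs per machine, which suggests the
# per-size schedules were budgeted for about the same number of runs.
```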