Created: 16 Mar 2014 | Modified: 23 Jun 2017
UPDATE: After finding a bug in the timing of innovation/mutation, I’m not using any of the previous data.
Designed to allow comparison of innovation rate plus learning/prerequisites versus learning/prerequisites alone. Learning rates are the same as SAA-2, but a smaller set of trait spaces is examined (dropping the 4/4 pair).
2160 total runs
332 runs on the restart, Wednesday 3/26
Intended to get more samples for the overlap between SAA-5 and SAA-2. Only the largest and smallest trait space sizes are used, but all of the learning rates and both the mutation and no-mutation cases. For these parameter combinations, we will end up with 30 replications per parameter combination.
1920 total runs
1190 runs at 6:45pm Weds 3/26
SAA-8 finished quickly, so I’m going to explore larger population sizes. If I can examine the effects of both high-fidelity learning and demography, that will be interesting.
Started 8:35am Sunday 3/30
Data moved back to Kimura in
Smaller sample size experiment, duplicates learning rates and other param values from SAA-2
Completed early morning Thursday 3/20.
Larger sample size experiment, duplicates other param values from SAA-2
4860 total runs
Killed it, and used just the 2430 runs for popsize 225.
Intended to test whether graphs really do change at high learning rates, i.e., whether the learning environment really changes the accumulation of cultural information
1280 total runs
Given interesting phenomena at learning rates of 0.7 and especially 0.8, I’m testing more of the high learning rates. The small state spaces are most sensitive because everything gets lost in the vastness of larger state spaces.
Started 9:50pm Sat 3/29
For finalized runs only:
Creating a movie from a bunch of DOT files works like this:
analytics/export-traitgraph-to-graphviz.py --experiment saa-2 --action sample --ssize 100 --directory tmp
for d in *.dot; do ( dot -Tpng $d -o $d.png; echo "$d.png 300" >> build ); done
~/bin/makeQuickTime.py build test.mov
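The render-and-frame-list step of the pipeline above can be sketched in Python. This is a minimal sketch, not the project's actual code: `build_frame_list` is a hypothetical helper name, the "300" per-frame token is copied from the shell loop above, and `makeQuickTime.py` (the author's tool that consumes the build file) is not reproduced here. It assumes Graphviz's `dot` is on the PATH when the default renderer is used.

```python
import subprocess

def build_frame_list(dot_files, build_file, render=None):
    """Render each DOT file to PNG and write a frame list for movie assembly.

    Sketch of the shell loop in the log above. `render(dot_path, png_path)`
    converts one DOT file to a PNG; by default it invokes Graphviz's `dot`,
    which is assumed to be installed and on the PATH.
    """
    if render is None:
        render = lambda d, p: subprocess.check_call(["dot", "-Tpng", d, "-o", p])
    lines = []
    for d in sorted(dot_files):  # sorted so frames appear in a stable order
        png = d + ".png"
        render(d, png)
        # "300" is the per-frame duration token used by makeQuickTime.py
        lines.append("%s 300" % png)
    with open(build_file, "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines
```

The resulting `build` file would then be handed to `makeQuickTime.py` exactly as in the shell version.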