I have a program that takes an initial rule and an initial (or randomized) state file, iterates a certain number of times, and gives me a new result.
The results look something like:
{"version":1,"description":"Hyperbolic cellular field","rule":"B3/S23","state":0,"cells":[{"state":1,"path":[3]},{"state":1,"path":[]},{"state":1,"path":[1]},{"state":1,"path":[7]},{"state":1,"path":[5]},{"state":1,"path":[1,1]},{"state":1,"path":[3,2]},{"state":1,"path":[9]}]}
I want to:

1. pick a rule out of a document that lists rules;
2. run that rule through various starting positions for a certain number of iterations and save the new file, stopping early once the computation gets slow or the iterations fall into a repeating pattern;
3. take that file, find the state-1 cell that's farthest away from the center (the one whose "path" has the most commas), count how many state-1 cells there are at the end, and record both values;
4. repeat with a new initial configuration a couple of times, then move on to a new rule from the document, apply it to the same starting configurations, and so on.
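For the "stop once the iterations fall into a repeating pattern" part, one approach is to hash each state and break as soon as a hash repeats. A minimal runnable sketch; `step` here is just a toy stand-in for the real iterator:

```shell
# Toy iterator standing in for the real program: cycles 0 -> 1 -> 2 -> 0.
step() {
  echo $(( ($1 + 1) % 3 ))
}

state=0
seen=""
iters=0
while [ "$iters" -lt 100 ]; do
  # Hash the current state; with the real program this would be
  # something like: h=$(sha256sum out.json | cut -d' ' -f1)
  h=$(printf '%s' "$state" | sha256sum | cut -d' ' -f1)
  case " $seen " in
    *" $h "*) echo "cycle detected after $iters iterations"; break ;;
  esac
  seen="$seen $h"
  state=$(step "$state")
  iters=$((iters + 1))
done
```

Storing hashes rather than full states keeps the memory of past iterations small, at the cost of a (negligible) chance of hash collision.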
So far I have some commands that I think might be helpful, but I'm not sure how to put it all together:

cat huge.json | tr '}' '\n' | grep -c '"state":1'

zcat huge.json.gz | tr '}' '\n' | grep -c '"state":1'
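These two counting pipelines differ only in how the file is read, so they can be folded into one helper that handles both plain and gzipped files (this assumes the one-cell-per-`}` layout shown in the sample above):

```shell
# Count live ("state":1) cells in a plain or gzipped result file.
# Splitting on '}' puts each cell on its own line, so grep -c
# (which counts matching lines) gives the number of live cells.
count_live() {
  case "$1" in
    *.gz) zcat -- "$1" ;;
    *)    cat  -- "$1" ;;
  esac | tr '}' '\n' | grep -c '"state":1'
}
```

One caveat: after the split, the file header shares a line with the first cell, so if the top-level `"state"` field were ever 1 while the first cell is 0, the count would be off by one.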

man seq
man xargs
man tr
man awk
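To put the pieces together, the outer structure is just two nested loops: one over the rules document, one over seeds (which `seq` from the list above can generate). A sketch under stated assumptions: `simulate` is a hypothetical stand-in to be replaced by the real program, and `rules.txt` is assumed to hold one rule per line:

```shell
# Stand-in for the real program: should print the final JSON for a
# given rule and seed. Here it just emits a fixed two-cell sample.
simulate() {
  printf '%s\n' '{"state":0,"cells":[{"state":1,"path":[3]},{"state":1,"path":[1,1]}]}'
}

printf 'B3/S23\nB36/S23\n' > rules.txt   # example rules document

while read -r rule; do
  for seed in $(seq 1 3); do
    out=$(simulate "$rule" "$seed")
    live=$(printf '%s\n' "$out" | tr '}' '\n' | grep -c '"state":1')
    printf '%s,%s,%s\n' "$rule" "$seed" "$live"   # rule,seed,live-cell count
  done
done < rules.txt > results.csv
```

On top of this skeleton you can bolt on the iteration cap, the repeating-pattern check, and the farthest-cell measurement, recording one CSV row per (rule, seed) pair.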

zcat huge.json.gz | tr '}' '\n' | tr -d -C ',\n' | awk '{print length}' | sort -n | tail -1
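One caveat with counting commas per `}`-split fragment: the first fragment also contains the file header, whose field-separating commas get counted too, so the maximum can come from the header rather than from a cell's path. Extracting just the `"path":[...]` arrays first avoids that; a sketch assuming GNU grep for `-o`:

```shell
# Print the depth (number of path entries) of the deepest cell,
# for a plain or gzipped result file.
max_depth() {
  case "$1" in *.gz) zcat -- "$1" ;; *) cat -- "$1" ;; esac |
    grep -o '"path":\[[0-9,]*\]' |
    awk -F'[][]' '{
      n = ($2 == "") ? 0 : split($2, a, ",")  # entries in this path
      if (n > max) max = n
    } END { print max + 0 }'
}
```

This reports the number of path entries (commas plus one for non-empty paths), which is probably the more natural "distance from center" measure than the raw comma count.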