- Node functions: Currently, each node uses a step function based on a random threshold. Most neural networks use sigmoid and similar functions, so I'd like my neural networks to be able to use those as well. However, I then saw a paper on Cartesian Genetic Programming (CGP), which, unlike genetic programming (GP), uses a network structure (instead of a tree) to represent programs and emphasizes separating the genotype from the phenotypic representation. Reading it got me thinking about giving the nodes in my networks an even wider range of functions: how could I have functions that take in types that are not real numbers? How would nodes coordinate between their outputs and expected inputs if those are different types?
- Evolutionary Strategies: There seem to be a few different definitions for this, but I take it to mean applying a genetic algorithm not just to the neural networks I'm developing, but also to the parameters that define the bounds on those networks. In particular, there's a method called Covariance Matrix Adaptation Evolution Strategy (CMA-ES) which drives these changes in a way that avoids wasting effort on parameters that are closely correlated.
- Problem Classification: Currently my networks can handle an array of real numbers for their input and produce an array of real numbers for their output. I'd like the program to handle different forms of data and produce different types of output. What's more, once the program has solved several different types of problems, it should be able to tell from the data coming in (based on a classifier network) what kind of problem it is and create the initial parameters accordingly.
- Human-Assisted Selection: There are already several systems that create a random group of individuals, ask the user to select which ones they prefer, and make the next generation from that. I'd like to have the same ability with my program. On another level, a neural network could use the initial population of individuals as its input and the user's selection as its expected output to train it in making suggestions to the user.
- Multilevel Selection: By creating additional populations of individuals, more complex behavior can be generated...
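To make the node-functions idea concrete, here is a minimal sketch (class and function names are illustrative, not from my actual program) of nodes that carry an activation function chosen from a shared table, so that evolution can mutate the function itself as well as the threshold:

```python
import math
import random

# Illustrative activation table: the step function takes a per-node
# threshold, while sigmoid and tanh ignore it.
ACTIVATIONS = {
    "step":    lambda x, threshold: 1.0 if x >= threshold else 0.0,
    "sigmoid": lambda x, _: 1.0 / (1.0 + math.exp(-x)),
    "tanh":    lambda x, _: math.tanh(x),
}

class Node:
    def __init__(self, fn_name="step", threshold=None):
        self.fn_name = fn_name
        self.threshold = threshold if threshold is not None else random.random()

    def activate(self, weighted_sum):
        return ACTIVATIONS[self.fn_name](weighted_sum, self.threshold)

    def mutate_function(self):
        # A mutation operator can simply swap the node's function in place.
        self.fn_name = random.choice(list(ACTIVATIONS))
```

The open question about non-real-valued types would amount to each entry in the table also declaring input and output types, with mutation restricted to type-compatible swaps.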
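As a toy illustration of the evolutionary-strategies idea, here is a (1+1)-ES with a self-adapting step size on an assumed stand-in objective (the sphere function), not my actual fitness measure. CMA-ES extends this basic loop by adapting a full covariance matrix, so correlated parameters get perturbed together rather than independently:

```python
import random

def sphere(x):
    # Toy objective: minimized at the origin.
    return sum(v * v for v in x)

def one_plus_one_es(dim=5, sigma=0.5, iterations=2000, seed=0):
    rng = random.Random(seed)
    parent = [rng.uniform(-5, 5) for _ in range(dim)]
    for _ in range(iterations):
        # Perturb every parameter with Gaussian noise of scale sigma.
        child = [v + rng.gauss(0, sigma) for v in parent]
        if sphere(child) <= sphere(parent):
            parent = child
            sigma *= 1.1   # success: widen the search
        else:
            sigma *= 0.98  # failure: narrow the search
    return parent
```

The step-size adaptation here is a crude version of the 1/5th-success rule; the point is that the strategy parameters themselves evolve alongside the solution.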
It has started me thinking about a very interesting set of questions: Can you use neuroevolution in a cumulative fashion, so that you train a network to do one thing and then train it to do something more advanced based on that initial training? It's the adage that you have to crawl before you can walk, and walk before you can run (even though one skill isn't used to perform the next).
Are there learning tracks where you train a neural network on the first thing in the track, and that prepares it to learn the next thing, and so on? Or is it more of a network where each learning outcome is a node, and all the nodes feeding into the next layer need to be learned before that layer can be learned? This seems more logical to me. Not only does a person need to learn how to crawl, but they also must learn how to stand before they learn how to walk. Maybe a feed-forward network, since it's unlikely that something learned later would be required for an earlier learning node (though back-propagation does seem, in effect, to strengthen the earlier understandings).
The question then becomes, what starts the whole thing off? What are the first things that babies learn and must master before they can progress to the next stage of learning? Certainly as a person gains experience in multiple, cross-cutting fields (e.g. music, language), the inspiration and understanding to progress becomes more complex, but what about at the very beginning? Is it something that can be mapped out? Has someone already done this?